## Inspiration
Our solution was named in remembrance of Mother Teresa.
## What it does
Robotic technology assists nurses and doctors with medicine delivery and patient handling across the hospital, including ICUs. We are planning to build a low-code/no-code app that helps COVID-19 patients scan themselves: the mobile app is integrated with the CT scanner, saving doctors time and preventing human error. We trained a CNN model on COVID-19 CT scans and integrated it into our application to help COVID-19 patients. The datasets were collected from Kaggle and tested with an efficient algorithm that reaches an accuracy of around 80%, and doctors can maintain each patient's record. The primary beneficiaries of the app are patients.
## How we built it
Bots are often referred to as one of the most promising and advanced forms of human-machine interaction. The designed bot can be handled manually with an app through Go and cloud technology, with a predefined database of actions; further moves are manually controlled through the mobile application. Simultaneously, to reduce the workload of doctors, a customized feature is included to process X-ray images through the app using a convolutional neural network (CNN) as part of the image-processing system. CNNs are deep learning algorithms that are very powerful for the analysis of images and give a quick, accurate classification of disease based on the information gained from digital X-ray images, so these features were included to reduce the workload of doctors. To get better detection accuracy, we used an open-source Kaggle dataset.
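For illustration, a minimal binary classifier along the lines described above could be built with TensorFlow/Keras as in the sketch below; the layer sizes, directory layout, and training settings are assumptions for the example, not our exact model.

```python
# Minimal sketch of a binary CNN scan classifier (illustrative only).
# Assumes images are organised as data/covid/*.png and data/normal/*.png.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32, label_mode="binary"
)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the scan is COVID-positive
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("covid_cnn.h5")  # the saved model is what the mobile app would query
```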
## Challenges we ran into
The data for the initial stage can be collected from Kaggle, but for the real-time implementation, the working model and the Flutter mobile application need datasets collected from nearby hospitals, which was the challenge.
## Accomplishments that we're proud of
* Counselling and Entertainment
* Diagnosing therapy using pose detection
* Regular checkup of vital parameters
* SOS to doctors with live telecast
* Supply of medicines and food
## What we learned
* CNN
* Machine Learning
* Mobile Application
* Cloud Technology
* Computer Vision
* Pi Cam Interaction
* Flutter for Mobile application
## Implementation
* The bot is designed to have the supply carrier at the top and the motor driver connected with 4 wheels at the bottom.
* The battery will be placed in the middle and a display is placed in the front, which will be used for selecting the options and displaying the therapy exercise.
* The image aside is a miniature prototype with some features
* The bot will be integrated with path planning, done with the help of Mission Planner, where we configure the controller and select each location as a node.
* If an obstacle is present in the path, it will be detected with lidar placed at the top.
* In some scenarios where medicines need to be bought, the bot is fitted with an audio receiver and speaker, so that once it reaches a certain spot in the mission plan, it announces the medicines, which can then be placed in the carrier.
* The bot will have a carrier at the top, where the items will be placed.
* This carrier will also have a sub-section.
* So if the bot is carrying food for patients in the ward, once it reaches a particular patient, the LED in the section containing that patient's food will blink.
## Inspiration
In 2012, infants and newborns in the U.S. made up 73% of hospital stays and 57.9% of hospital costs, adding up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. The data is then pushed onto a MySQL server and fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back into the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result; this iterative process trains the model over time. The process aims to ease the stress on parents and ensure that those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
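As a rough illustration of the fetch-and-predict step described above, the sketch below pulls check-in rows from a MySQL table and fits a regression model that outputs a risk score; the table name, column names, and connection details are placeholders, not our production schema.

```python
# Illustrative fetch-and-predict step; table/column names and credentials are placeholders.
import mysql.connector
import numpy as np
from sklearn.linear_model import LinearRegression

conn = mysql.connector.connect(host="example-host", user="app", password="secret",
                               database="infantxpert")
cur = conn.cursor()
cur.execute("SELECT temperature, hours_since_meal, fluid_intake_ml, needs_attention "
            "FROM checkins")
rows = np.array(cur.fetchall(), dtype=float)
conn.close()

X, y = rows[:, :3], rows[:, 3]         # features and doctor-verified labels
model = LinearRegression().fit(X, y)   # simple regression, as in the write-up

new_checkin = np.array([[38.2, 5.0, 120.0]])
risk = float(model.predict(new_checkin)[0])
print(f"Estimated probability of needing medical attention: {risk:.2f}")
```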
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it had already been done, so we were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and with the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up ML concepts and databasing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results was shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us. These components on their own are fairly useless; our biggest accomplishment was interfacing all of them with one another and creating an overall user experience that delivers in performance and results. Using SHA-256, we securely passed each user a unique and near-impossible-to-reverse hash to allow them to check the status of their evaluation.
## What we learned
We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML behind a website. We also learnt how to set up a server and configure it for remote access, and a lot about how cyber-security plays a crucial role in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us and to create a more reliable and user-friendly interface.
## What's next for InfantXpert
We're looking to develop iOS and Android versions of this application. We'd like to provide it as a free service so everyone can access the application regardless of their financial status.
## Inspiration:
The human effort of entering details from a paper form to a computer led us to brainstorm and come up with this idea which takes away the hassle of the information collection phase.
## What it does?
The application accepts an image or PDF of a medical form and analyses it to extract the information, which is then stored in a database.
## How we built it?
We applied Optical Character Recognition (OCR) techniques to extract the data present in the form, and then formatted it to provide valuable information to the nurse/doctor.
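For illustration, a minimal OCR pass over a scanned form might look like the sketch below. It uses the open-source Tesseract engine via pytesseract as a stand-in for whichever OCR API is chosen, and the field labels are assumptions.

```python
# Minimal OCR sketch; pytesseract/Tesseract stands in for the OCR API actually chosen.
import re
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("medical_form.png"))

# Very naive "Label: value" extraction; real handwriting needs far more care.
record = {}
for line in text.splitlines():
    match = re.match(r"\s*(Name|Age|Allergies)\s*:\s*(.+)", line, re.IGNORECASE)
    if match:
        record[match.group(1).lower()] = match.group(2).strip()

print(record)  # e.g. {'name': 'Jane Doe', 'age': '42'} ready to store in the database
```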
## Challenges we ran into:
Each person has their own way of writing, and thus it was difficult to identify the characters.
## Accomplishments that we're proud of:
We came up with an MVP during these 36 hours by implementing a solution involving new technologies and workflows.
## What we learned?
We learned about the several APIs providing OCR functionality, which differ in how they are optimised. We also learned more about client-server architecture, in the sense that the patient's (client) request reaches the nurse/hospital (server) asynchronously.
## What's next for Care Scan?
We would love to take this idea forward and integrate the solution with different services and regulations to provide an enriching experience to the user, including, but not limited to, machine learning, NLP, and event-driven architectures.
## Link to Codebase
<https://github.com/Care-Scan>
## Inspiration
The inspiration behind BizSpyMaster 007 was to empower small businesses without dedicated IT personnel to access and harness the power of their sales data. Our goal was to create a platform that could seamlessly integrate various technologies, including MindsDB, Snowflake, Streamlit, PostgreSQL, Python, and the OpenAI LLM. By providing a user-friendly, multilingual interface in the form of an AI advisor, we aim to revolutionize the process of generating valuable business insights, bringing accessibility and efficiency to our clients.
## What it does
BizSpyMaster 007 acts as a business insight advisor; it utilizes a combination of data analytics and the AI capabilities provided by MindsDB. It gives users access to global sales transaction data stored in the BIZMASTERDB.SALESSCHEMA.SALESFACTTB table. Clients can ask questions about their sales data, and the platform generates relevant insights using the data's integration with the AI model. This helps companies understand sales trends, customer behavior, and other valuable information without requiring technical expertise.
## How we built it
The project was built by integrating several technologies, with Streamlit serving as the front-end interface for user interactions. Snowflake and PostgreSQL were used as the data warehouse and retrieval solutions, allowing seamless access to the sales data. The Snowflake database served as the primary source for feeding data into MindsDB, which is what generates the sales forecasts. We harnessed MindsDB's bridge between databases and LLMs after attending a comprehensive evening workshop. We used Python as the core programming language, and the OpenAI LLM was employed for natural language processing and communication with users. The integration process involved establishing connections between the various systems and optimizing query performance.
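To give a flavour of the workflow, the sketch below queries a MindsDB model over its MySQL-compatible interface from Python; the host, port, credentials, and the model name are placeholders rather than our actual deployment details.

```python
# Illustrative query against a MindsDB model over its MySQL-compatible interface.
# Host, port, credentials, and the model name are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", port=47335, user="mindsdb", password="")
cur = conn.cursor()

# Ask the LLM-backed advisor a question grounded in the Snowflake sales table.
cur.execute(
    "SELECT answer FROM mindsdb.sales_advisor "
    "WHERE question = 'Which region had the strongest sales growth last quarter?'"
)
print(cur.fetchone()[0])
conn.close()
```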
## Challenges we ran into
Some of the challenges included maintaining the performance of data retrieval from Snowflake and PostgreSQL. The integration between numerous platforms also required extensive research, as each comes with its own requirements and complexities.
## Accomplishments that we're proud of
We are proud to have built a functional and user-friendly platform that revolutionizes the workflow in gathering business insights. BizSpyMaster 007 bridges the gap between business users and technical work, allowing a wider audience to be able to manage and handle business even without specialized IT personnel.
## What we learned
We learned that effective technology integration is complex but important. We learned how to optimize data retrieval processes and how to ensure the accuracy of both the data and the predictions.
## What's next for BIZSPYMASTER 007
Some of our future possibilities include expanding the datasets and the types of questions the AI advisor can answer. We can also generate financial reports by adding integrations and provide forecasts further into the future. We will also focus on refining the user experience by making it more intuitive. Moreover, we aim to explore opportunities for scalability and improved integration with other business tools to provide comprehensive solutions for small businesses.
**What inspired us**
Despite the increasing prevalence of LLMs, their power still hasn't been leveraged to improve the experience of students during class. In particular, LLMs are often discouraged by professors because they tend to give inaccurate information or too much of it. To remedy this issue, we created an LLM assistant that has access to all the information for a course, including the course information, lecture notes, and problem sets. Furthermore, in order for this to be useful for actual courses, we made sure the LLM does not answer specific questions about the problem set directly. Instead, the LLM guides the student and provides relevant information for the student to complete the coursework without being handed the answer. It essentially serves as a TA that helps students navigate their problem sets.
**What we learned**
Through this project, we delved into the complexities of integrating AI with software solutions, uncovering the essential role of user interface design and the nuanced craft of prompt engineering. We learned that crafting effective prompts is crucial, requiring a deep understanding of the AI’s capabilities and the project's specific needs. This process taught us the importance of precision and creativity in prompt engineering, where success depends on translating educational objectives into prompts that generate meaningful AI responses.
Our exploration also introduced us to the concept of retrieval-augmented generation (RAG), which combines the power of information retrieval with generative models to enhance the AI's ability to produce relevant and contextually accurate outputs. While we explored the potentials of using the OpenAI and Together APIs to enrich our project, we ultimately did not incorporate them into our final implementation. This exploration, however, broadened our understanding of the diverse AI tools available and their potential applications. It underscored the importance of selecting the right tools for specific project needs, balancing between the cutting-edge capabilities of such APIs and the project's goals. This experience highlighted the dynamic nature of AI project development, where learning about and testing various tools forms a foundational part of the journey, even if some are not used in the end.
**How we built our project**
Building our project required a strategic approach to assembling a comprehensive dataset from the Stanford CS 106B course, which included the syllabus, problem sets, and lectures. This effort ensured our AI chatbot was equipped with a detailed understanding of the course's structure and content, setting the stage for it to function as an advanced educational assistant. Beyond the compilation of course materials, a significant portion of our work focused on refining an existing chatbot user interface (UI) to better serve the specific needs of students engaging with the course. This task was far from straightforward; it demanded not only a deep dive into the chatbot's underlying logic but also innovative thinking to reimagine how it interacts with users. The modifications we made to the chatbot were extensive and targeted at enhancing the user experience by adjusting the output behavior of the language learning model (LLM).
A pivotal change involved programming the chatbot to moderate the explicitness of its hints in response to queries about problem sets. This adjustment required intricate tuning of the LLM's output to strike a balance between guiding students and stimulating independent problem-solving skills. Furthermore, integrating direct course content into the chatbot’s responses necessitated a thorough understanding of the LLM's mechanisms to ensure that the chatbot could accurately reference and utilize the course materials in its interactions. This aspect of the project was particularly challenging, as it involved manipulating the chatbot to filter and prioritize information from the course data effectively. Overall, the effort to modify the chatbot's output capabilities underscored the complexity of working with advanced AI tools, highlighting the technical skill and creativity required to adapt these systems to meet specific educational objectives.
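Concretely, the hint-moderation behaviour comes down to how the system prompt is assembled before each model call. The sketch below shows one simplified way to build such a prompt from the course materials; it is an illustration with assumed file names, not our exact prompt.

```python
# Simplified sketch of assembling a hint-moderating system prompt; file names are assumed.
from pathlib import Path

def build_system_prompt(course_dir: str, problem_set: str) -> str:
    # Pull in the retrieved course context (syllabus, lecture notes, the pset text).
    context = "\n\n".join(
        Path(course_dir, name).read_text()
        for name in ("syllabus.md", "lecture_notes.md", problem_set)
    )
    rules = (
        "You are a teaching assistant for CS 106B.\n"
        "Use ONLY the course materials below when answering.\n"
        "Never give the final answer or complete code for a problem-set question; "
        "instead, point to the relevant lecture concept and give a graded hint."
    )
    return f"{rules}\n\n--- COURSE MATERIALS ---\n{context}"

# The returned string is prepended to the chat history sent to the language model.
print(build_system_prompt("cs106b", "pset3.md")[:500])
```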
**Challenges we faced**
Some challenges we faced included scoping our project to ensure it was feasible given the hackathon's constraints, including time. We learned React.js and PLpgSQL for our project, since we had only used JavaScript previously. Other challenges included installing Docker and the Supabase CLI and ensuring all dependencies were properly managed. We also had to configure Supabase and create the database schema. Finally, there were deployment configuration issues, as we had to integrate our front-end application with our back-end and ensure they communicated properly.
## Inspiration
it's really fucking cool that big LLMs (ChatGPT) are able to figure out on their own how to use various tools to accomplish tasks.
for example, see Toolformer: Language Models Can Teach Themselves to Use Tools (<https://arxiv.org/abs/2302.04761>)
this enables a new paradigm of self-assembling software: machines controlling machines.
what if we could harness this to make our own lives better -- a lil LLM that works for you?
## What it does
i made an AI assistant (SMS) using GPT-3 that's able to access various online services (calendar, email, google maps) to do things on your behalf.
it's just like talking to your friend and asking them to help you out.
## How we built it
a lot of prompt engineering + few shot prompting.
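roughly, the core of it is a few-shot prompt that maps an incoming text to a tool call -- the tool names and examples below are made up for illustration, not the real prompt.

```python
# rough sketch of the few-shot prompt that turns a text message into a tool call
# (tool names and examples are made up for illustration)
FEW_SHOT_PROMPT = """You are an assistant that turns text messages into tool calls.

Message: what's on my calendar tomorrow?
Tool: calendar.list(date="tomorrow")

Message: email sam that i'm running 10 min late
Tool: email.send(to="sam", body="running 10 min late")

Message: how long to get to the airport?
Tool: maps.eta(destination="airport")

Message: {user_message}
Tool:"""

def build_prompt(user_message: str) -> str:
    return FEW_SHOT_PROMPT.format(user_message=user_message)

# the completed prompt goes to GPT-3; the completion is parsed and the tool is executed
print(build_prompt("remind me to call mom at 6pm"))
```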
## What's next for jarbls
shopping, logistics, research, etc -- possibilities are endless
* more integrations !!!
the capabilities explode exponentially with the number of integrations added
* long term memory
come by and i can give you a demo
Math is something that has always been hard for me to visualize. So being able to stretch and expand graphs with our hands seemed like a lot of fun!
Hands-on-math is a web app which allows any user with a Leap Motion device to draw graphs in the air with their hands and view the resultant function on their computer screen. Like a graphing calculator in the air!
We built it using the Leap Motion controller and its JavaScript library, with the Wolfram Alpha API handling mathematical computations on the backend, and we're really excited to see where it may go.
## Inspiration
We took inspiration from our own experience of how hard education can be. Studies conducted by edX show that classes teaching quantitative subjects like mathematics and physics tend to receive lower ratings from students, in terms of engagement and educational capacity, than their qualitative counterparts. Of all Advanced Placement tests, AP Physics 1 receives on average the lowest scores year after year, according to College Board statistics. The fact is, across the board, many quantitative subjects are just more difficult to teach, a fact compounded by the isolation that came with remote work as a result of the COVID-19 pandemic. So we wanted to find a fun way to promote learning.
In keeping with the theme of Ctrl + Alt + Create, we took inspiration from another educational game from the history of computing. In 1991, Microsoft released a programming language and environment called QBASIC to teach first time programmers how to code. One of the demo programs they released with this development environment was a game called Gorillas, an artillery game where two players can guess the velocity and angle in order to try to hit their opponents. We decided to re-imagine this iconic little program from the 90s into a modern networked webgame, designed to teach students kinematics and projectile motion.
## What it does
The goal of our project was to create an educational entertainment game that allows students to better engage with quantitative subjects. We wanted to provide a tool for instructors for both in-classroom and remote education, and to make education more accessible for students attending remotely. Specifically, we focused on introductory high school physics, one of the most challenging subjects to tackle. Similar to Kahoot, teachers can set up a classroom or lobby for students to join from their devices. Students can join either as individuals or as a team. Once a competition begins, students use virtual tape measures to find distances in their surroundings, determining how far away their opponent is and the size of the obstacles they need to overcome. Based on these parameters, they can then try out an appropriate angle and calculate an initial velocity to fire their projectiles. Although there is no timer, students are incentivized to work quickly in order to fire off their projectiles before their opponents. Students also have a limited number of shots, incentivizing them to double-check their work.
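The physics students reason about here reduces to standard projectile motion. As a reference, a minimal hit-check for a chosen angle and initial velocity might look like the sketch below (written in Python purely for illustration, rather than the game's JavaScript).

```python
# Illustrative projectile-motion check (Python stand-in for the game's JavaScript logic).
import math

G = 9.81  # m/s^2

def lands_on_target(v0: float, angle_deg: float, target_x: float,
                    tolerance: float = 0.5) -> bool:
    """True if a projectile fired at v0 (m/s) and angle_deg lands within
    `tolerance` metres of target_x on flat ground, ignoring obstacles and drag."""
    theta = math.radians(angle_deg)
    rng = (v0 ** 2) * math.sin(2 * theta) / G  # range formula R = v^2 sin(2*theta) / g
    return abs(rng - target_x) <= tolerance

print(lands_on_target(v0=20.0, angle_deg=45.0, target_x=40.77))  # True: max range is ~40.8 m
```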
## How we built it
We built this web app using HTML, CSS, and Javascript. Our team split up into a Graphics Team and Logics Team. The Logics Team implemented the Kinematics and the game components of this modern recreation of QBASIC Gorillas. The Graphics Team created designs and programmed animations to represent the game logic as well as rendering the final imagery. The two teams came together to make sure everything worked well together.
## Challenges we ran into
We ran into many challenges, including time constraints and our lack of knowledge about certain concepts. We later realized we should have spent more time planning and designing the game before splitting into teams, because the rushed start caused miscommunication between the teams about certain elements of the game. Due to time constraints, we did not have time to implement a multiplayer version of the game.
## Accomplishments that we're proud of
The game logic works in single-player mode. We are proud that we were able to implement the entire game logic, as well as all the graphics necessary to show its functionality.
## What we learned
We learned the intricacies of game design and game development. Most of us have usually worked with more information-based websites and software technologies. We learned how to make a webapp game from scratch. We also improved our HTML/CSS/Javascript knowledge and our concepts of MVC.
## What's next for Gorillamatics
First, we would like to add networking to the game to better meet our goals of increasing connectivity in the classroom and sparking a love of physics in a fun way. We would also like to have better graphics. In the long term, we plan to add different obstacles to create different kinematics problems.
## Inspiration
The failure of a previous project using the Leap Motion API motivated us to learn it properly and use it successfully this time.
## What it does
Our hack records a motion password desired by the user. Then, when the user wishes to open the safe, they repeat the hand motion that is then analyzed and compared to the set password. If it passes the analysis check, the safe unlocks.
## How we built it
We built a cardboard model of our safe and motion input devices using Leap Motion and Arduino technology. To create the lock, we utilized Arduino stepper motors.
## Challenges we ran into
Learning the Leap Motion API and debugging it was the toughest challenge for our group. Hot glue dangers and complications also impeded our progress.
## Accomplishments that we're proud of
All of our hardware and software worked to some degree of success. However, we recognize that there is room for improvement and if given the chance to develop this further, we would take it.
## What we learned
The Leap Motion API is more difficult than expected, while communication between Python programs and Arduino programs is simpler than expected.
## What's next for Toaster Secure
-Wireless Connections
-Sturdier Building Materials
-User-friendly interface
## 💡Inspiration
Small businesses and local entrepreneurs are an essential facet of our communities, with the COVID-19 Pandemic demonstrating how much we depend on them and vice versa. However, it can often be difficult for these businesses to know the best trajectory for their growth, with no streamlined method currently existing for gathering and analyzing their customers' feedback.
SentiView aims to change that.
## 🔍 What it does
Small businesses rely greatly on reviews; they tell them what they're doing right and what to improve. SentiView takes in a set of customer reviews and sorts positive reviews from neutral/negative reviews. Then, it finds the top five keywords that occur most frequently in each review set, helping business owners swiftly find the root of their problems.
SentiView takes in a set of customer reviews and uses the power of sentiment analysis to determine whether they are positive or negative. Then, each review is tokenized and cleaned based on a stoplist. The 5 most common positive and negative words are then displayed to the user; they can then select and scroll through the reviews containing each word, gathering a first-hand view of their business's needs and strengths.
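The keyword step is essentially tokenizing, filtering against a stoplist, and counting. A minimal sketch of that part is shown below; the sentiment split itself is stubbed out here, since in the real app it comes from the Cohere API.

```python
# Minimal sketch of the keyword step; the sentiment labels are stubbed here because
# in the real app they come from the Cohere API.
import re
from collections import Counter

STOPLIST = {"the", "a", "an", "and", "was", "is", "it", "to", "of", "i", "very"}

def top_keywords(reviews, n=5):
    words = []
    for review in reviews:
        tokens = re.findall(r"[a-z']+", review.lower())
        words.extend(t for t in tokens if t not in STOPLIST)
    return Counter(words).most_common(n)

negative_reviews = [
    "The service was slow and the soup was cold.",
    "Cold food again, and the service took forever.",
]
print(top_keywords(negative_reviews))  # 'service' and 'cold' each appear twice
```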
## ⚙️ How we built it
We developed the application's front end using React, Bootstrap, and Tailwind. The backend used a Flask server that hosted the data generated from the Cohere API, which we used for sentiment analysis and tokenization purposes.
## 🚧 Challenges we ran into
There were a variety of challenges we ran into, primarily with regard to integrating our front-end and back-end systems. We discovered that the Cohere API didn't play nicely with React, requiring us to spin up a Flask server to enable us to transfer data back and forth from the API. Setting up this server and achieving the desired functionality was a challenge in and of itself, as none of us were particularly experienced with Flask.
## ✔️ Accomplishments that we're proud of
* The team's synergy was unmatched. We all shared a passion for this project and ferociously pursued its completion to the very end.
* Successfully leveraging Cohere to perform NLP tasks and achieve our initial goals.
* Our minimalistic, yet powerful home page.
* The substantial code! It was our first time using sentiment analysis and Flask, which threw some big obstacles our way (that we powered through, of course).
## 📚 What we learned
* How to work with the Cohere API and tailor it for our specific use case.
* Some of us interacted with Tailwind for the first time at this hackathon, broadening our knowledge of relevant frameworks.
* Flask. Powerful and challenging, we learned how to link the front end with the back end.
* How to integrate a Python backend with a React frontend and send data between the layers.
* How to use Figma to generate higher-quality and tailored graphics.
## 🔭 What's next for SentiView
* SentiView would like to be able to produce graphs that display trends over time in customer satisfaction to give small businesses a way to assess their improvement.
* Provide bespoke recommendations to businesses.
## Inspiration
Small businesses have suffered throughout the COVID-19 pandemic. In order to help them get back on track once life comes back to normal, this app can attract new and loyal customers alike to the restaurant.
## What it does
Businesses can sign up and host their restaurant online, where users can search them up, follow them, and scroll around and see their items. Owners can also offer virtual coupons to attract more customers, or users can buy each other food vouchers that can be redeemed next time they visit the store.
## How we built it
The web app was built using Flask and Google's Firebase for the backend. Multiple Flask extensions and Python packages were used, such as flask\_login, flask\_bcrypt, pyrebase, and more. HTML/CSS with Jinja2 and Bootstrap were used for the view (the structure of the code followed an MVC model).
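As a rough illustration of the Firebase wiring, the sketch below writes a menu item to the Realtime Database via pyrebase and reads it back; the config values and database paths are placeholders, not our actual schema.

```python
# Illustrative pyrebase sketch; config values and database paths are placeholders.
import pyrebase

config = {
    "apiKey": "PLACEHOLDER",
    "authDomain": "foodtalk-demo.firebaseapp.com",
    "databaseURL": "https://foodtalk-demo.firebaseio.com",
    "storageBucket": "foodtalk-demo.appspot.com",
}
firebase = pyrebase.initialize_app(config)
db = firebase.database()

# A business adds a menu item under its own node.
db.child("businesses").child("bella-pizza").child("menu").push(
    {"name": "Margherita", "price": 12.5}
)

# A user browsing the restaurant fetches the menu back.
menu = db.child("businesses").child("bella-pizza").child("menu").get().val()
print(menu)  # dict keyed by auto-generated push IDs
```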
## Challenges we ran into
-Restructuring of the project:
Sometime during Saturday, we had to restructure the whole project because we ran into a circular dependency. The whole structure of the code changed, which meant learning a new way of deploying it.
-Many 'NoneType object is not subscriptable' and attribute errors:
Getting data from our Firebase Realtime Database proved to be quite difficult at times, because there were many branches, and each time we tried to retrieve values we ran the risk of getting this error. Depending on the type of user, the structure of the database changes, but the users are similarly related (Business inherits from Users), so sometimes during login/registration the user type wouldn't be determined properly, leading to NoneType object errors.
-Having different pages for each type of user:
This was not as much of a challenge as the other two, thanks to the help of Jinja2. However, due to the different pages for different users, sometimes the errors would return (like names returning as None, because the user types would be different).
## Accomplishments that we're proud of
-Having a functional search and login/registration system
-Implementing encryption with user passwords
-Implementing dynamic URLs that would show different pages based on Business/User type
-Allowing businesses to add items on their menu, and uploading them to Firebase
-Fully incorporating our data and object structures in Firebase
## What we learned
Every accomplishment is something we have learned. These are things we haven't implemented before in our projects. We learned how to use Firebase with Python, and how to use Flask with all of its other mini modules.
## What's next for foodtalk
Due to time constraints, we still have to implement the ability for businesses to create their own posts. The coupon voucher and gift receipt system has yet to be implemented, and there could be more customization for users and businesses to put on their profiles, like profile pictures and biographies.
## Inspiration
It took us a while to think of an idea for this project. After a long day of Zoom school, we sat down on Friday with very little motivation to do work. As we pushed through this lack of drive, our friends in the other room would offer little encouragements to keep us going, and we started to realize just how powerful those comments are. For all people working online, and university students in particular, the struggle to balance life on and off the screen is difficult. We often find ourselves forgetting to do daily tasks like drinking enough water or even just taking a small break, and, when we do, there is very often negativity towards the idea of rest. This is where You're Doing Great comes in.
## What it does
Our web application is focused on helping students and online workers alike stay motivated throughout the day while making the time and space to care for their physical and mental health. Users are able to select the kinds of activities that they want to be reminded about (e.g. drinking water, eating food, movement, etc.) and they can also input messages that they find personally motivational. Then, throughout the day (at their own predetermined intervals), they will receive random positive messages, either through text or call, that inspire and encourage them. There is an additional feature where users can send messages to friends so that they can share warmth and support, because we are all going through it together. Lastly, we understand that sometimes positivity and understanding aren't enough for what someone is going through, so we have a list of further resources available on our site.
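A stripped-down version of the reminder sender, which in our stack runs on a schedule inside AWS Lambda and uses Twilio, might look like the sketch below; the credentials, phone numbers, and messages are placeholders.

```python
# Stripped-down reminder sender; in our stack this logic runs inside a scheduled AWS Lambda.
# Credentials and phone numbers are placeholders.
import random
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AUTH_TOKEN = "your_auth_token"
client = Client(ACCOUNT_SID, AUTH_TOKEN)

MESSAGES = [
    "You're doing great! Take a sip of water.",
    "Time for a five-minute stretch break. You've earned it.",
    "Have you eaten today? A small snack counts.",
]

def send_reminder(to_number: str) -> None:
    client.messages.create(body=random.choice(MESSAGES),
                           from_="+15550001234", to=to_number)

send_reminder("+15557654321")
```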
## How we built it
We built it using:
* AWS
+ DynamoDB
+ Lambda
+ Cognito
+ APIGateway
+ Amplify
* React
+ Redux
+ React-Dom
+ MaterialUI
* serverless
* Twilio
* Domain.com
* Netlify
## Challenges we ran into
Centring divs should not be so difficult :(
Transferring the name servers from domain.com to Netlify
Serverless deploying with dependencies
## Accomplishments that we're proud of
Our logo!
It works :)
## What we learned
We learned how to host a domain and we improved our front-end html/css skills
## What's next for You're Doing Great
We could always implement more reminder features, and we could refine our friends feature so that people can only include selected individuals. Additionally, we could add chatbot functionality so that users could do a little check-in when they get a message.
## Inspiration
Social-distancing is hard, but little things always add up.
What if person X is standing too close to person Y in the c-mart, and then person Y ends up in the hospital for more than a month battling for their life? It doesn't end there: the c-mart then gets shut down for contaminated merchandise.
All this happened because person X didn't step back.
These types of scenarios, and the hope of going back to normal life, pushed me to create **Calluna**.
## What Calluna does
Calluna is intended to be an Apple Watch application. In the app, you can check out all the notifications you've received that day, when you received them, and your settings.
When not in the app, you get pinged when you're too close to someone who also has the app, making this a great feature for business workforces.
## How Calluna was built
Calluna was built very simply using Figma. I have linked below both the design and a fully functional prototype!
## Challenges we ran into
I had some issues with ideation. I needed something that was useful, simple, and has growth potential. I also had some headaches on the first night that could possibly be due to sleep deprivation and too much coffee that ended up making me sleep till the next morning.
## Accomplishments that we're proud of
I love the design! I feel like this is a project that will be really helpful *especially* during the COVID-19 pandemic.
## What we learned
I learned how to incorporate fonts to accent the color and scene, as well as how to work with such small frames and make them easy on the eyes!
## What's next for Calluna
I hope to create and publish the iOS app with GPS integration, then possibly an Android version too.
## Inspiration
Our inspiration comes from many of our own experiences with dealing with mental health and self-care, as well as from those around us. We know what it's like to lose track of self-care, especially in our current environment, and wanted to create a digital companion that could help us in our journey of understanding our thoughts and feelings. We were inspired to create an easily accessible space where users could feel safe in confiding in their mood and check-in to see how they're feeling, but also receive encouraging messages throughout the day.
## What it does
Carepanion allows users an easily accessible space to check-in on their own wellbeing and gently brings awareness to self-care activities using encouraging push notifications. With Carepanion, users are able to check-in with their personal companion and log their wellbeing and self-care for the day, such as their mood, water and medication consumption, amount of exercise and amount of sleep. Users are also able to view their activity for each day and visualize the different states of their wellbeing during different periods of time. Because it is especially easy for people to neglect their own basic needs when going through a difficult time, Carepanion sends periodic notifications to the user with messages of encouragement and assurance as well as gentle reminders for the user to take care of themselves and to check-in.
## How we built it
We built our project through the collective use of Figma, React Native, Expo and Git. We first used Figma to prototype and wireframe our application. We then developed our project in Javascript using React Native and the Expo platform. For version control we used Git and Github.
## Challenges we ran into
Some challenges we ran into included transferring our React knowledge into React Native knowledge, as well as handling package managers with Node.js. With most of our team having working knowledge of React.js but being completely new to React Native, we found that while some features of React were easily interchangeable with React Native, some were not, and we had a tricky time figuring out which ones did and didn't. One example of this is passing props; we spent a lot of time researching ways to pass props in React Native. We also had a difficult time resolving the package files in our application using Node.js, as our team members all used different versions of Node. This meant that some packages were not compatible with certain versions of Node, and some members had difficulty installing specific packages in the application. Luckily, we figured out that if we all upgraded our versions, we were able to successfully install everything. Ultimately, we were able to overcome our challenges and learn a lot from the experience.
## Accomplishments that we're proud of
Our team is proud of the fact that we were able to produce an application from ground up, from the design process to a working prototype. We are excited that we got to learn a new style of development, as most of us were new to mobile development. We are also proud that we were able to pick up a new framework, React Native & Expo, and create an application from it, despite not having previous experience.
## What we learned
Most of our team was new to React Native, mobile development, as well as UI/UX design. We wanted to challenge ourselves by creating a functioning mobile app from beginning to end, starting with the UI/UX design and finishing with a full-fledged application. During this process, we learned a lot about the design and development process, as well as our capabilities in creating an application within a short time frame.
We began by learning how to use Figma to develop design prototypes that would later help us in determining the overall look and feel of our app, as well as the different screens the user would experience and the components that they would have to interact with. We learned about UX, and how to design a flow that would give the user the smoothest experience. Then, we learned how basics of React Native, and integrated our knowledge of React into the learning process. We were able to pick it up quickly, and use the framework in conjunction with Expo (a platform for creating mobile apps) to create a working prototype of our idea.
## What's next for Carepanion
While we were nearing the end of work on this project during the allotted hackathon time, we thought of several ways we could expand and add to Carepanion that we did not have enough time to get to. In the future, we plan on continuing to develop the UI and functionality. Ideas include customizable check-in and calendar options, expanding the bank of messages and notifications, personalizing the messages further, and allowing customization of the app's colours for a more visually pleasing and calming experience for users.
## Inspiration
COVID-19 has turned every aspect of the world upside down. Unwanted things happen and situations change; breakdowns in communication and economic crises cannot be prevented. Thus, we developed an application that can help people survive this pandemic by providing them with **a shift-taker job platform that creates a win-win solution for both parties.**
## What it does
This application connects companies/managers that need someone to cover a shift for an absent employee for a certain period of time, without any contract. As a result, both sides will be able to cover their needs and survive this pandemic. Beyond its main goal, this app can generally be used to help people **gain income anytime, anywhere, and with anyone.** They can adjust their time, their needs, and their availability to get a job with Job-Dash.
## How we built it
For the design, Figma is the application we used to lay out all the screens and give smooth transitions between frames. While the UI was being worked on, developers started coding the functionality to make the application work.
The front end was made using React; we used React Bootstrap and some custom styling to build the pages according to the UI. State management was done using the Context API to keep it simple. We used Node.js on the backend for easy context switching between frontend and backend, with Express and an SQLite database for development. Authentication was done using JWT, allowing us to avoid storing session cookies.
## Challenges we ran into
In terms of UI/UX, dealing with the ethics of user information was a challenge for us, as was providing complete details for both parties. On the developer side, using Bootstrap components ended up slowing us down, as our design was custom and required us to override most of the styles. It would have been better to use Tailwind, as it would have given us more flexibility while also cutting down time versus writing CSS from scratch. Due to the online nature of the hackathon, some tasks took longer.
## Accomplishments that we're proud of
Some of us picked up new technologies while working on the project, and creating a smooth UI/UX on Figma, complete with every feature, was satisfying for all of us.
Here's the link to the Figma prototype - User point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=68%3A3872&scaling=min-zoom)
Here's the link to the Figma prototype - Company/Business point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=107%3A10&scaling=min-zoom)
## What we learned
We learned that we should narrow down the scope more for future hackathons, so it is easier to focus on one unique feature of the app.
## What's next for Job-Dash
In terms of UI/UX, we would love to make some more improvements to the layout so it better serves its purpose of helping people find additional income through Job-Dash effectively. On the developer side, we would like to continue developing the features. We spent a long time thinking about different features that would be helpful to people, but due to the short nature of the hackathon, implementation was only a small part, as we underestimated the time it would take. On the bright side, we have the design ready and exciting features to work on.
## Inspiration
The idea was to help people who are blind discreetly gather context during social interactions and general day-to-day activities.
## What it does
The glasses take a picture and analyze it using Microsoft, Google, and IBM Watson's vision recognition APIs to try to understand what is happening. They then form a sentence and let the user know. There's also a neural network at play that discerns between the two dens and can tell who is in the frame.
## How I built it
We took an RPi camera and increased the length of its cable. We then made a hole in the lens of the glasses and fit the camera in there. We also added a touch sensor to discreetly control the camera.
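A simplified version of the capture-on-touch loop is sketched below; the GPIO pin number and the downstream vision-API endpoint are illustrative assumptions.

```python
# Simplified capture-on-touch loop; the GPIO pin and the vision endpoint are assumptions.
import time
import requests
import RPi.GPIO as GPIO
from picamera import PiCamera

TOUCH_PIN = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(TOUCH_PIN, GPIO.IN)
camera = PiCamera()

try:
    while True:
        if GPIO.input(TOUCH_PIN):          # user taps the touch sensor on the frame
            camera.capture("frame.jpg")
            with open("frame.jpg", "rb") as f:
                # Placeholder endpoint; in our build the image goes to the
                # Microsoft/Google/Watson vision APIs for analysis.
                resp = requests.post("https://example.com/describe", files={"image": f})
            print(resp.text)               # the description is then spoken to the user
            time.sleep(1)                  # crude debounce
finally:
    GPIO.cleanup()
```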
## Challenges I ran into
The biggest challenge we ran into was natural language processing, i.e. trying to piece together a human-sounding sentence that describes the scene.
## What I learned
I learnt a lot about the different vision APIs out there and about creating and training your own neural network.
## What's next for Let Me See
We want to further improve our analysis and reduce our analysis time.
## Inspiration
Our project, "**Jarvis**," was born out of a deep-seated desire to empower individuals with visual impairments by providing them with a groundbreaking tool for comprehending and navigating their surroundings. Our aspiration was to bridge the accessibility gap and ensure that blind individuals can fully grasp their environment. By providing the visually impaired community access to **auditory descriptions** of their surroundings, a **personal assistant**, and an understanding of **non-verbal cues**, we have built the world's most advanced tool for the visually impaired community.
## What it does
"**Jarvis**" is a revolutionary technology that boasts a multifaceted array of functionalities. It not only perceives and identifies elements in the blind person's surroundings but also offers **auditory descriptions**, effectively narrating the environmental aspects they encounter. We utilize a **speech-to-text** and **text-to-speech model** similar to **Siri** / **Alexa**, enabling ease of access. Moreover, our model possesses the remarkable capability to recognize and interpret the **facial expressions** of individuals who stand in close proximity to the blind person, providing them with invaluable social cues. Furthermore, users can ask questions that may require critical reasoning, such as what to order from a menu or navigating complex public-transport-maps. Our system is extended to the **Amazfit**, enabling users to get a description of their surroundings or identify the people around them with a single press.
## How we built it
The development of "**Jarvis**" was a meticulous and collaborative endeavor that involved a comprehensive array of cutting-edge technologies and methodologies. Our team harnessed state-of-the-art **machine learning frameworks** and sophisticated **computer vision techniques** to get analysis about the environment, like , **Hume**, **LlaVa**, **OpenCV**, a sophisticated computer vision techniques to get analysis about the environment, and used **next.js** to create our frontend which was established with the **ZeppOS** using **Amazfit smartwatch**.
## Challenges we ran into
Throughout the development process, we encountered a host of formidable challenges. These included the intricacies of training a model to recognize and interpret a diverse range of environmental elements and human expressions. We also had to grapple with optimizing the model for real-time usage on the **Zepp smartwatch**, getting the **vibrations** to trigger according to the **Hume** emotion-analysis model, and integrating **OCR (Optical Character Recognition)** capabilities with the **text-to-speech** model. However, our team's relentless commitment and problem-solving skills enabled us to surmount these challenges.
## Accomplishments that we're proud of
Our proudest achievements in the course of this project encompass several remarkable milestones. These include the successful development of "**Jarvis**," a model that can audibly describe complex environments to blind individuals, thus enhancing their **situational awareness**. Furthermore, our model's ability to discern and interpret **human facial expressions** stands as a noteworthy accomplishment.
## What we learned
# Hume
**Hume** is instrumental for our project's **emotion analysis**. The analysis is translated into **audio descriptions** and **vibrations** on the **Amazfit smartwatch**, providing users with valuable insights about their surroundings. By capturing facial expressions and analyzing them, our system can provide feedback on the **emotions** displayed by individuals in the user's vicinity. This feature is particularly beneficial in social interactions, as it aids users in understanding **non-verbal cues**.
# Zepp
Our project involved a deep dive into the capabilities of **ZeppOS**, and we successfully integrated the **Amazfit smartwatch** into our web application. This integration is not just a technical achievement; it has far-reaching implications for the visually impaired. With this technology, we've created a user-friendly application that provides an in-depth understanding of the user's surroundings, significantly enhancing their daily experiences. By using the **vibrations**, the visually impaired are notified of their actions. Furthermore, the intensity of the vibration is proportional to the intensity of the emotion measured through **Hume**.
# Zilliz
We used **Zilliz** to host **Milvus** online and stored a dataset of images and their vector embeddings. Each image was labelled with the person it shows; hence, we were able to build an **identity-classification** tool using **Zilliz's** reverse-image-search capability. We further set a threshold below which people's identities were not recognized, i.e. their data was not in **Zilliz**. We estimate the accuracy of this model to be around **95%**.
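The identity lookup boils down to a nearest-neighbour search over stored face embeddings with a distance cutoff. A sketch of that query using pymilvus is shown below; the collection name, field names, credentials, and threshold are illustrative assumptions.

```python
# Illustrative pymilvus reverse-image-search sketch; the collection name, field names,
# credentials, and distance threshold are assumptions.
from pymilvus import connections, Collection

connections.connect(uri="https://YOUR-ZILLIZ-ENDPOINT", token="YOUR-API-KEY")
faces = Collection("face_embeddings")
faces.load()

def identify(embedding, max_distance=0.6):
    results = faces.search(
        data=[embedding],
        anns_field="embedding",
        param={"metric_type": "L2", "params": {"nprobe": 10}},
        limit=1,
        output_fields=["name"],
    )
    hit = results[0][0]
    if hit.distance > max_distance:   # too far from every stored face: unknown person
        return "unknown"
    return hit.entity.get("name")

# `embedding` comes from the face-embedding model run on the current camera frame.
```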
# Github
We acquired a comprehensive understanding of the capabilities of version control using **Git** and established an organization. Within this organization, we allocated specific tasks labeled as "**TODO**" to each team member. **Git** was effectively employed to facilitate team discussions, workflows, and identify issues within each other's contributions.
The overall development of "**Jarvis**" has been a rich learning experience for our team. We have acquired a deep understanding of cutting-edge **machine learning**, **computer vision**, and **speech synthesis** techniques. Moreover, we have gained invaluable insights into the complexities of real-world application, particularly when adapting technology for wearable devices. This project has not only broadened our technical knowledge but has also instilled in us a profound sense of empathy and a commitment to enhancing the lives of visually impaired individuals.
## What's next for Jarvis
The future holds exciting prospects for "**Jarvis.**" We envision continuous development and refinement of our model, with a focus on expanding its capabilities to provide even more comprehensive **environmental descriptions**. In the pipeline are plans to extend its compatibility to a wider range of **wearable devices**, ensuring its accessibility to a broader audience. Additionally, we are exploring opportunities for collaboration with organizations dedicated to the betterment of **accessibility technology**. The journey ahead involves further advancements in **assistive technology** and greater empowerment for individuals with visual impairments.
## Inspiration
As busy university students with multiple commitments on top of job hunting, we are all too familiar with the tedium and frustration associated with having to compose cover letters for the few job openings that do require them. Given that much of cover letter writing is simply summarizing one's professional qualifications to tie in with company specific information, we have decided to exploit the formulaic nature of such writing and create a web application to generate cover letters with minimal user inputs.
## What it does
hire-me-pls is a web application that obtains details of the user’s qualifications from their LinkedIn profile, performs research on the provided target company, and leverages these pieces of information to generate a customized cover letter.
## How we built it
For our front end, we utilized JavaScript with React, leveraging the Tailwind CSS framework for the styling of the site. We designed the web application such that once we have obtained the user’s inputs (LinkedIn profile url, name of target company), we send these inputs to the backend.
In the backend, built in Python with FastAPI, we extract relevant details from the provided LinkedIn profile using the Prospeo API, gather relevant company information by querying the Metaphor API, and finally feed these findings into OpenAI to generate a customized cover letter for our user.
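Conceptually, the backend's core endpoint accepts the two inputs, gathers profile and company context, and asks the LLM for a letter. The FastAPI sketch below reflects that shape; the helper functions stand in for our Prospeo, Metaphor, and OpenAI calls, whose exact payloads are omitted.

```python
# Sketch of the core endpoint; fetch_profile / fetch_company / generate_letter are
# stand-ins for our Prospeo, Metaphor, and OpenAI calls, whose payloads are omitted here.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CoverLetterRequest(BaseModel):
    linkedin_url: str
    company: str

def fetch_profile(linkedin_url: str) -> dict:
    # Stand-in for the Prospeo call; returns parsed profile JSON in the real app.
    return {"name": "Placeholder Person", "experience": []}

def fetch_company(company: str) -> str:
    # Stand-in for the Metaphor query; returns a summary of relevant pages in the real app.
    return f"Background notes about {company}."

def generate_letter(profile: dict, company_info: str) -> str:
    # Stand-in for the OpenAI call; prompts the LLM with profile + company context.
    return f"Dear Hiring Manager, ... ({profile['name']}, {company_info})"

@app.post("/cover-letter")
def cover_letter(req: CoverLetterRequest) -> dict:
    profile = fetch_profile(req.linkedin_url)
    company_info = fetch_company(req.company)
    return {"letter": generate_letter(profile, company_info)}
```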
## Challenges we ran into
In addition to the general bugs and unexpected delays that come with any project of this scale, our team was challenged with finding a suitable API for extracting relevant data from a given LinkedIn profile. Since most of the tools available on the market are targeted towards recruiters, their functionalities and pricing are often incompatible with our requirements for this web application. After spending a surprising amount of time on research, we settled on Prospeo, which returns data in the convenient JSON format, provides fast, consistent responses, and offers a generous free tier that we could leverage.
Another challenge we have encountered were the CORS issues that arose when we first tried making requests to the Prospeo API from the front end. After much trial and error, we finally resolved these issues by moving all of our API calls to the backend of our application.
## Accomplishments that we're proud of
A major hurdle that we are proud to have overcome throughout the development process is the fact that half of our team of hackers are beginners, for whom PennApps is their very first hackathon. Through thoughtful delegation of tasks and the patient mentorship of the more seasoned programmers on the team, we were able to achieve the high productivity necessary for completing this web application within the tight deadline.
## What we learned
Through building hire-me-pls, we have gained a greater appreciation for what is achievable when we strategically combine different API’s and AI tools to build off each other. In addition, the beginners on the team gained not only valuable experience contributing to a complex project in a fast-paced environment, but also exposure to useful web development tools that they can use in future personal projects.
## What's next for hire-me-pls
While hire-me-pls already achieves much of our original vision, we recognize that there are always ways to make a good thing better.
In refining hire-me-pls, we aim to improve the prompt that we provide to Open AI to achieve cover letters that are even more concise and specific to the users’ qualifications and their companies of interest.
Further down the road, we would like to explore the possibility of tailoring cover letters to specific roles/job postings at a given company, providing a functionality to generate cold outreach emails to recruiters, and finally, ways of detecting how likely an anti-AI software would detect a hire-me-pls output as being AI generated. | winning |
# SpeakEasy
## Overview
SpeakEasy: AI Language Companion
Visiting another country but don't want to sound like a robot? Want to learn a new language but can't get your intonation to sound like other people's? SpeakEasy can make you sound like, well, you!
## Features
SpeakEasy is an AI language companion which centers around localizing your own voice into other languages.
If, for example, you wanted to visit another country but didn't want to sound like a robot or Google Translate, you could still talk in your native language. SpeakEasy can then automatically repeat each statement in the target language in exactly the intonation you would have if you spoke that language.
Say you wanted to learn a new language but couldn't quite get your intonation to sound like the source material you were learning from. SpeakEasy is able to provide you phrases in your own voice so you know exactly how your intonation should sound.
## Background
SpeakEasy is the product of a group of four UC Berkeley students. For all of us, this is our first submission to a hackathon and the result of several years of wanting to get together to create something cool. We are excited to present every part of SpeakEasy, from the remarkably accurate AI speech to just how much we've all learned about rapidly developed software projects.
### Inspiration
Our group started by thinking of ways we could make an impact. We then expanded our search to include using and demonstrating technologies developed by CalHacks' generous sponsors, as we felt this would be a good way to demonstrate how modern technology can be used to help everyday people.
In the end, we decided on SpeakEasy and used Cartesia to realize many of the AI-powered functions of the application. This enabled us to make something which addresses a specific real-world problem (robotic-sounding translations) many of us have either encountered or are attempting to avoid.
### Challenges
Our group has varying levels of software development experience, and especially given our limited hackathon experience (read: none), there were many challenging steps. For example: deciding on project scope, designing high-level architecture, implementing major features, and especially debugging.
What was never a challenge, however, was collaboration. We worked quite well as a team and had a good time doing it.
### Accomplishments / Learning
We are proud to say that despite the many challenges, we accomplished a great deal with this project. We have a fully functional Flask backend with a React frontend (see "Technical Details") which uses multiple different APIs. This project successfully ties together audio processing, asynchronous communication, artificial intelligence, UI/UX design, database management, and much more. What's more, many of our group members learned all of this from the fundamentals up.
## Technical Details
As mentioned in an earlier section, SpeakEasy is designed with a Flask (Python) backend and React (JavaScript) frontend. This is a very standard pairing that is used often at hackathons because it requires relatively little setup. Flask only requires two lines of code to make an entirely new endpoint, while React can produce a full audio-playing page with callbacks that looks absolutely beautiful in less than an hour. For storing data, we use SQLAlchemy (backed by SQLite).
1. When a user opens SpeakEasy, they are first sent to a landing page.
2. After pressing any key, they are taken to a training screen. Here they will record a 15-20 second message (ideally the one shown on screen) which will be used to create an embedding. This is accomplished with the Cartesia "Clone Voice from Clip" endpoint. A Cartesia Voice (abbreviated as "Voice") is created from the returned embedding (using the "Create Voice" endpoint) which contains a Voice ID. This Voice ID is used to uniquely identify each voice, which itself is in a specific language. The database then stores this voice and creates a new user which this voice is associated with.
3. When the recording is complete and the user clicks "Next", they will be taken to a split screen where they can choose between the two main program functions of SpeakEasy.
4. If the user clicks on the vocal translation route, they will be brought to another recording screen. Here, they record a sound in English which is then sent to the backend. The backend encodes this MP3 data into PCM, sends it to a speech-to-text API, and then transfers it into a text translation API. Separately, the backend trains a new Voice (using the Cartesia Localize Voice endpoint, wrapped by get/create Voice since Localize requires an embedding instead of a Voice ID) with the intended target language and uses the Voice ID it returns. The backend then sends the translated text to the Cartesia "Text to Speech (Bytes)" endpoint using this new Voice ID. This is then played back to the user as a response to the original backend request. All created Voices are stored in the database and associated with the current user. This is done so returning users do not have to retrain their voices in any language.
5. If the user clicks on the language learning route, they will be brought to a page which displays a randomly selected phrase in a certain language. It will then query the Cartesia API to pronounce that phrase in that language, using the preexisting Voice ID if available (or prompting to record a new phrase if not). A request is made to the backend to submit the microphone input, which is then compared to Cartesia's estimation of your speech in a target language. The backend then returns a set of feedback based on the difference between the two pronunciations, and displays that to the user on the frontend.
6. After each route is selected, the user may choose to go back and select either route (the same route again or the other route).
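To make the translation route above concrete, here is a heavily simplified sketch of the backend flow. The helper functions are hypothetical placeholders standing in for the Cartesia endpoints and the external speech-to-text and translation APIs named above, not their real signatures.

```python
import io
from flask import Flask, request, send_file

app = Flask(__name__)

# Placeholder helpers: each stands in for one external call described above.
def speech_to_text(mp3_bytes: bytes) -> str:        # STT API (MP3 -> PCM -> text)
    raise NotImplementedError
def translate_text(text: str, lang: str) -> str:    # text translation API
    raise NotImplementedError
def localize_voice(embedding, lang: str):           # Cartesia "Localize Voice"
    raise NotImplementedError
def create_voice(embedding) -> str:                 # Cartesia "Create Voice" -> Voice ID
    raise NotImplementedError
def tts_bytes(text: str, voice_id: str) -> bytes:   # Cartesia "Text to Speech (Bytes)"
    raise NotImplementedError
def db_get_voice(user_id: str, lang: str):          # SQLAlchemy lookups in the real app
    return None
def db_get_embedding(user_id: str):
    raise NotImplementedError
def db_save_voice(user_id: str, lang: str, voice_id: str) -> None:
    pass

@app.route("/translate", methods=["POST"])
def translate():
    """Accept an English MP3 clip and reply with the translated sentence
    spoken in the user's own (localized) voice."""
    user_id = request.form["user_id"]
    target_lang = request.form.get("lang", "fr")
    clip = request.files["audio"].read()

    text = speech_to_text(clip)
    translated = translate_text(text, target_lang)

    voice_id = db_get_voice(user_id, target_lang)
    if voice_id is None:                             # localize once, then cache it
        voice_id = create_voice(localize_voice(db_get_embedding(user_id), target_lang))
        db_save_voice(user_id, target_lang, voice_id)

    return send_file(io.BytesIO(tts_bytes(translated, voice_id)), mimetype="audio/mpeg")
```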
## Cartesia Issues
We were very impressed with Cartesia and its abilities, but noted a few issues which would improve the development experience.
* Clone Voice From Clip endpoint documentation
+ The documentation for the endpoint in question details a `Response` which includes a variety of fields: `id`, `name`, `language`, and more. However, the endpoint only returns the embedding in a dictionary. It is then required to send the embedding into the "Create Voice" endpoint to create an `id` (and other fields), which are required for some further endpoints.
* Clone Voice From Clip endpoint length requirements
+ The clip supplied to the endpoint in question appears to require a duration of greater than a second or two. See "Error reporting" for further details.
* Text to Speech (Bytes) endpoint output format
+ The TTS endpoint requires an output format be specified. This JSON object notably lacks an `encoding` field in the MP3 configuration which is present for the other formats (raw and WAV). The solution to this is to send an `encoding` field with the value for one of the other two formats, despite this functionally doing nothing.
* Embedding format
+ The embedding is specified as a list of 192 numbers, some of which may be negative. Python's JSON parser did not like the minus signs and frequently ran into issues with them. If possible, it would be good to allow the embedding to be base64 encoded, hashed, or otherwise transformed to avoid negatives. Optimally embeddings would not have negatives, though this seems difficult to realize.
* Response code mismatches
+ Some response codes returned from endpoints do not match their listed function. For example, a response code of 405 should not be returned when there is a formatting error in the request. Similarly, 400 is returned before 404 when using invalid endpoints, making it difficult to debug. There are several other instances of this but we did not collate a list.
* Error reporting
+ If (most) endpoints return in JSON format, errors should also be returned in JSON format. This would prevent many parsing issues and simplify design. In addition, error messages are too vague to glean any useful information. For example, 500 is always "Bad request" regardless of the underlying error cause, which merely restates the error name.
## Future Improvements
In the future, it would be interesting to investigate the following:
* Proper authentication
* Cloud-based database storage (with redundancy)
* Increased error checking
* Unit and integration test coverage, with CI/CD
* Automatic recording quality analysis
* Audio streaming (instead of buffering) using WebSockets
* Mobile device compatibility
* Reducing audio processing overhead
## Inspiration
We wanted a low-anxiety tool to boost our public speaking skills. With an ever-accelerating shift of communication away from face-to-face and towards pretty much just memes, it's becoming difficult for younger generations to express themselves or articulate an argument without a screen as a proxy.
## What does it do?
DebateABot is a web-app that allows the user to pick a topic and make their point, while arguing against our chat bot.
## How did we build it?
Our website is bootstrapped with JavaScript/jQuery and HTML5. The user talks to our web app, which uses NLP to convert speech to text and sends the text to our server, built with PHP with background processing written in Python. We perform keyword matching and search result ranking using the indico API, after which we run sentiment analysis on the text. The counter-argument, as a string, is sent back to the web app and is read aloud to the user using the Mozilla Web Speech API.
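A rough sketch of the server-side idea, shown in Python: score the user's argument, then answer from the opposite side with the canned point that shares the most keywords with their statement. The `sentiment()` helper is a placeholder for the indico call, and the point lists are assumptions.

```python
def sentiment(text: str) -> float:
    """Placeholder for the indico sentiment call (0 = negative, 1 = positive)."""
    raise NotImplementedError

def pick_counter_argument(user_text, pro_points, con_points):
    """Argue the opposite side: if the user sounds positive about the topic,
    reply with the con point sharing the most words with their statement."""
    candidates = con_points if sentiment(user_text) >= 0.5 else pro_points
    user_words = set(user_text.lower().split())
    return max(candidates, key=lambda c: len(user_words & set(c.lower().split())))
```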
## Some challenges we ran into
First off, trying to use the Watson APIs and the Azure APIs led to a lot of initial difficulties getting set up and getting access. Early on we also wanted to use the Amazon Echo that we have, but reached a point where it wasn't realistic to use AWS and Alexa skills for what we wanted to do. A common theme amongst other challenges has simply been sleep deprivation; staying up past 3am is a sure-fire way to exponentiate your rate of errors and bugs. The last significant difficulty is the bane of most software projects, and ours is no exception: integration.
## Accomplishments that we're proud of
The first time that we got our voice input to print out on the screen, in our own program, was a big moment. We also kicked ass as a team! This was the first hackathon EVER for two of our team members, and everyone had a role to play, and was able to be fully involved in developing our hack. Also, we just had a lot of fun together. Spirits were kept high throughout the 36 hours, and we lasted a whole day before swearing at our chat bot. To our surprise, instead of echoing out our exclaimed profanity, the Web Speech API read aloud "eff-asterix-asterix-asterix you, chat bot!" It took 5 minutes of straight laughing before we could get back to work.
## What we learned
The Mozilla Web Speech API does not swear! So don't get any ideas when you're talking to our innocent chat bot...
## What's next for DebateABot?
While DebateABot isn't likely to evolve into the singularity, it definitely has the potential to become a lot smarter. The immediate next step is to port the project over to be usable with Amazon Echo or Google Home, which eliminates the need for a screen, making the conversation more realistic. After that, it's a question of taking DebateABot and applying it to something important to YOU. Whether that's a way to practice for a Model UN or practice your thesis defence, it's just a matter of collecting more data.
<https://www.youtube.com/watch?v=klXpGybSi3A>
*Everything in this project was completed during TreeHacks.*
*By the way, we've included lots of hidden fishy puns in our writeup! Comment how many you find!*
## TL; DR
* Illegal overfishing is a massive issue (**>200 billion fish**/year), disrupting global ecosystems and placing hundreds of species at risk of extinction.
* Satellite imagery can detect fishing ships but there's little positive data to train a good ML model.
* To get synthetic data: we fine-tuned Stable Diffusion on **1/1000th of the data** (and at 10x the training speed) of a typical GAN, using satellite images of ships, and achieved comparable quality to SOTA. We only used **68** original images!
* We trained a neural network using our real and synthetic data that detected ships with **96%** accuracy.
* Built a global map and hotspot dashboard that lets governments view realtime satellite images, analyze suspicious activity hotspots, & take action.
* Created a custom polygon renderer on top of ArcGIS
* Our novel Stable Diffusion data augmentation method has potential for many other low-data applications.
Got you hooked? Keep reading!
## Let's get reel...
Did you know global fish supply has **decreased by [49%](https://www.scientificamerican.com/article/ocean-fish-numbers-cut-in-half-since-1970/)** since 1970?
While topics like deforestation and melting ice dominate sustainability headlines, overfishing is a seriously overlooked issue. After thoroughly researching sustainability, we realized that this was an important but under-addressed challenge.
We were shocked to learn that **[90%](https://datatopics.worldbank.org/sdgatlas/archive/2017/SDG-14-life-below-water.html) of fisheries are over-exploited** or collapsing. What's more, around [1 trillion](https://www.forbes.com/sites/michaelpellmanrowland/2017/07/24/seafood-sustainability-facts/?sh=2a46f1794bbf) (1,000,000,000,000) fish are caught yearly.
Hailing from San Diego, Boston, and other cities known for seafood, we were shocked to hear about this problem. Research indicates that despite many verbal commitments to fish sustainably, **one in five fish is illegally caught**. What a load of carp!
### People are shellfish...
Around the world, governments and NGOs have been trying to reel in overfishing, but economic incentives and self-interest mean that many ships continue to exploit resources secretly. It's hard to detect small ships on the world's 140 million square miles of ocean.
## What we're shipping
In short (we won't keep you on the hook): we used custom Stable Diffusion to create realistic synthetic image data of ships and trained a convolutional neural networks (CNNs) to detect and locate ships from satellite imagery. We also built a **data visualization platform** for stakeholders to monitor overfishing. To enhance this platform, we **identified several hotspots of suspicious dark vessel activity** by digging into 55,000+ AIS radar records.
While people have tried to build AI models to detect overfishing before, accuracy was poor due to high class imbalance. There are few positive examples of ships on water compared to the infinite negative examples of patches of water without ships. Researchers have used GANs to generate synthetic data for other purposes. However, it takes around **50,000** sample images to train a decent GAN. The largest satellite ship dataset only has ~2,000 samples.
We realized that Stable Diffusion (SD), a popular text-to-image AI model, could be repurposed to generate unlimited synthetic image data of ships based on relatively few inputs. We were able to achieve highly realistic synthetic images using **only 68** original images.
## How we shipped it
First, we read scientific literature and news articles about overfishing, methods to detect overfishing, and object detection models (and limitations). We identified a specific challenge: class imbalance in satellite imagery.
Next, we split into teams. Molly and Soham worked on the front-end, developing a geographical analysis portal with React and creating a custom polygon renderer on top of existing geospatial libraries. Andrew and Sayak worked on curating satellite imagery from a variety of datasets, performing classical image transformations (rotations, flips, crops), fine-tuning Stable Diffusion models and GANs (to compare quality), and finally using a combo of real and synthetic data to train an CNN. Andrew also worked on design, graphics, and AIS data analysis. We explored Leap ML and Runway fine-tuning methods.
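As an illustration of the classical augmentation step mentioned above, here is a minimal sketch using Pillow; the directory layout and the exact set of transforms are assumptions rather than our exact pipeline.

```python
from pathlib import Path
from PIL import Image, ImageOps

def augment(src_dir: str, dst_dir: str) -> None:
    """Expand a small set of satellite ship chips with rotations, flips and crops."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path).convert("RGB")
        variants = {
            "rot90": img.rotate(90),
            "rot180": img.rotate(180),
            "flip_h": ImageOps.mirror(img),   # horizontal flip
            "flip_v": ImageOps.flip(img),     # vertical flip
            # centre crop to ~80% and resize back, simulating a small offset
            "crop": img.crop((img.width // 10, img.height // 10,
                              img.width * 9 // 10, img.height * 9 // 10)).resize(img.size),
        }
        for name, variant in variants.items():
            variant.save(out / f"{path.stem}_{name}.png")
```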
## Challenges we tackled
Building Earth visualization portals is always quite challenging, but we could never have predicted the waves we would face. Among animations, rotations, longitude, latitude, country and ocean lines, and the most-feared WebGL, we had a lot to learn. For ocean lines, we made an API call to a submarine transmissions library and recorded features to feed into a JSON. Inspired by the beautiful animated globes of Stripe's and CoPilot's landing pages alike, we wrote our own, a challenge we ultimately pulled off.
Additionally, the synthesis between globe to 3D map was difficult, as it required building a new scroll effect compatible with the globe. These challenges, although significant at the time, were ultimately surmountable, as we navigated through their waters unforgivingly. This enabled the series of accomplishments that ensued.
It was challenging to build a visual data analysis layer on top of the ArcGIS library. The library was extremely granular, requiring us to assimilate the meshes of each individual polygon to display. To overcome this, we built our own component-based layer that enabled us to draw on top of a preexisting map.
## Making waves (accomplishments)
Text-to-image models are really cool but have failed to find that many real-world use cases besides art and profile pics. We identified and validated a relevant application for Stable Diffusion that has far-reaching effects for agriculture, industry, medicine, defense, and more.
We also made a sleek and refined web portal to display our results, in just a short amount of time. We also trained a CNN to detect ships using the real and synthetic data that achieved 96% accuracy.
## What we learned
### How to tackle overfishing:
We learned a lot about existing methods to combat overfishing that we didn't know about. We really became more educated on ocean sustainability practices and the pressing nature of the situation. We schooled ourselves on AIS, satellite imagery, dark vessels, and other relevant topics.
### Don't cast a wide net. And don't go overboard.
Originally, we were super ambitious with what we wanted to do, such as implementing Monte Carlo particle tracking algorithms to build probabilistic models of ship trajectories. We realized that we should really focus on a couple of ideas at max because of time constraints.
### Divide and conquer
We also realized that splitting into sub-teams of two to work on specific tasks and being clear about responsibilities made things go very smoothly.
### Geographic data visualization
Building platforms that enable interactions with maps and location data.
## What's on the horizon (implications + next steps)
Our Stable Diffusion data augmentation protocol has implications for few-shot learning of any object for agricultural, defense, medical and other applications. For instance, you could use our method to generate synthetic lung CT-Scan data to train cancer detection models or fine-tune a model to detect a specific diseased fruit not covered by existing general-purpose models.
We plan to create an API that allows anyone to upload a few photos of a specific object. We will build a large synthetic image dataset based off of those objects and train a plug-and-play CNN API that performs object location, classification, and counting.
While general purpose object detection models like YOLO work well for popular and broad categories like "bike" or "dog", they aren't feasible for specific detection purposes: for instance, a farmer trying to use computer vision to detect diseased lychees, or a medical researcher trying to detect cancerous cells on a microscope slide. Our method allows anyone to obtain an accurate task-specific object detection model, because one-size-fits-all doesn't cut it.
We're excited to turn the tide with our fin-tech!
*How many fish/ocean-related puns did you find?*
## Inspiration
Nowadays people have very little time (or so it seems). In addition, bank hours are often inconvenient for those who work full-time. We address these problems by connecting financial advisors to their clients directly via a secure video chat.
## What it does
Verifies user identity using facial recognition.
The user can see their banking dashboard.
* Schedule appointments with their financial advisor
* Text, voice, and video call with their advisor via the web app
## How we built it
We used a basic HTML/JavaScript/CSS website as a template, Google Firebase to store user images, and Microsoft Azure to host our site. We used the Microsoft Face API to analyze faces and verify a user's identity against a saved image of them.
## Challenges we ran into
Wow, there were lots! We started with an Angular application but that didn't go too well. Most APIs were better suited for JavaScript (not TypeScript). It was also difficult to write files straight from JavaScript to Google Firebase.
## Accomplishments that we're proud of
It looks pretty good! At our last hackathon, we had a java gui so this is definitely a step up in terms of UI.
## What we learned
We all learned javascript and some HTML. Chris learned how to use APIs.
## What's next for McAdvisor
We're thinking of pitching it to the big-five (maybe Goldman).
On a more short-term note: adding voice recognition, taking advantage of all of Firebase's functions, and integrating online signatures for forms. The list goes on and on!
## Inspiration
This webapp is made by three MIT girls, who often find themselves struggling with homework. We thought it'd be great to have a platform that connects people with similar academic interests. Psetting is MIT slang for doing homework, and as we explain to people who are interested, this webapp is like **"Tinder" for psetting**!
## What it does
Our webapp asks for basic information regarding user's coursework and study habits, and runs a machine learning algorithm to help the user find his/her best match as pset buddies.
## How we built it
We built our webpage using HTML. The server is in Flask, and our machine learning algorithm is implemented in Python using the NumPy and Pandas libraries.
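The matching idea itself can be sketched in a few lines; the feature columns below are placeholders for the questionnaire fields, and the real algorithm weighs them more carefully.

```python
import numpy as np
import pandas as pd

def best_matches(profiles: pd.DataFrame, user_id: str, k: int = 3) -> list:
    """Rank other students by cosine similarity of their feature vectors.
    `profiles` is indexed by student id; columns are 0/1 course flags plus
    scaled study-habit scores (e.g. night_owl, works_in_groups)."""
    X = profiles.to_numpy(dtype=float)
    X = X / np.clip(np.linalg.norm(X, axis=1, keepdims=True), 1e-9, None)
    target = X[profiles.index.get_loc(user_id)]
    scores = X @ target
    ranked = profiles.index[np.argsort(-scores)]
    return [s for s in ranked if s != user_id][:k]
```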
## Challenges we ran into
No one in our team has ever had web development experience, so we had to learn all the basic details (i.e. everything except Python) from scratch. Solving real-life problems can be more confounding than textbook examples. In our case, we had to deal with incomplete information, variations in human input, and processing large amounts of data within a reasonable time.
## Accomplishments that we're proud of
We are proud that we started with no experience and no project in mind and finished off with a functional web app in 24 hours.
## What we learned
HTML, Flask, the components of a web app (e.g. the server), and databases.
## What's next for PSetBuddies
We hope that our app will be widely used within MIT communities and beyond. We will also make our project into a real web app by hosting it online and transforming it into a mobile app.
## Inspiration
I've been locked out of my own car before, far from home, with no spare key or way of getting in.
## What it does
Knock Lock is an alternative way to unlock your car if you don't have your keys or don't want to break into your own vehicle.
## How we built it
A Piezo speaker listens for knocks based on the number of knocks and the time interval between each knock. This is run on an Arduino R3 microcontroller which is connected to a relay in the car that is responsible for the power locks. If the sequence of knocks is correct, then the Arduino will send an electrical pulse to the relay that will unlock the doors.
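The heart of that check is simple interval matching. A platform-agnostic sketch of the logic is below (written in Python purely for illustration; the project runs equivalent code on the Arduino, and the pattern and tolerance values are made-up examples):

```python
SECRET_PATTERN = [400, 200, 200, 600]   # expected gaps between knocks, in ms
TOLERANCE_MS = 150                      # how far off each gap may be

def knock_intervals(timestamps_ms):
    """Convert absolute knock times into gaps between consecutive knocks."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def pattern_matches(timestamps_ms):
    """True if both the knock count and every inter-knock gap are close enough."""
    gaps = knock_intervals(timestamps_ms)
    if len(gaps) != len(SECRET_PATTERN):
        return False
    return all(abs(g - e) <= TOLERANCE_MS for g, e in zip(gaps, SECRET_PATTERN))

# On the Arduino, a matching pattern pulses the relay pin that drives the power locks.
```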
## Challenges we ran into
The main issue we had with the project was designing and soldering the electrical circuit running from the car, the speaker, and Arduino. Another issue we had was wiring and working on the car in the cold.
## Accomplishments that we're proud of
We are proud of managing to get the Arduino to interact with the car based on input from the speaker.
## What we learned
As this is our first hardware based project, we learned a great deal in how electrical components work and being able to interact with a car's electrical systems on a rudimentary level.
## What's next for Knock Lock
We wish to wire the controller to the car's ignition to be able to not only unlock the car based on the correct knock, but also start the car simultaneously.
Pokemon is a childhood favourite for 90s kids.
What better way to walk down memory lane (and possibly procrastinate on whatever obligations you have) than by looking up your all-time favourite pokemon? This virtual pokedex includes its pokemon type, its base statistics, its attack abilities, an accompanying picture and much more.
Gotta catch'em all!
## Inspiration
Our inspiration for this project was discord bots, as we always wanted to make one that anyone can have fun using. We also wanted to apply our knowledge of API calls from our courses to get a real understanding of how they work. Finally, we love Pokemon, and a discord bot that lets people immerse themselves in pokemon adventures was something we would love to create.
## What it does
The PokemonBot allows one to start a pokemon adventure. By choosing a starter Pokemon (Charmander, Squirtle, or Bulbasaur), the player begins their journey. The pokemon and stats are saved to a particular player ID, and every player on a particular server is saved under that server's guild ID. With their starter pokemon, players can hunt in 8 different habitats for new pokemon. They can fight or capture wild rare pokemon and add them to their collection. The Pokemon Bot also reports battle details such as your pokemon's HP and the opponent's HP. Overall, the bot allows discord users to engage in a pokemon game that all members of a server can play.
## How we built it
We built it using Python in Visual Studio Code, utilizing libraries such as openpyxl, pokebase, and discord.py.
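As a flavour of how the pieces fit together, here is a minimal discord.py-style sketch of a hunt command; the habitat table and helper are simplified stand-ins for our actual game logic, and the pokebase lookup is assumed to mirror the PokeAPI response layout.

```python
import random

import discord
import pokebase as pb
from discord.ext import commands

bot = commands.Bot(command_prefix="!", intents=discord.Intents.default())
HABITATS = {"cave": ["zubat", "geodude"], "forest": ["caterpie", "pidgey"]}

def pick_wild_pokemon(habitat: str):
    """Choose a random wild pokemon for the habitat and fetch its base HP."""
    name = random.choice(HABITATS.get(habitat, HABITATS["forest"]))
    mon = pb.pokemon(name)              # PokeAPI lookup via pokebase
    hp = mon.stats[0].base_stat         # the first stat entry is HP in PokeAPI order
    return name, hp

@bot.command()
async def hunt(ctx, habitat: str = "forest"):
    """!hunt <habitat> lets the player encounter a wild pokemon to fight or catch."""
    name, hp = pick_wild_pokemon(habitat)
    await ctx.send(f"A wild {name.title()} appeared with {hp} HP! Type !fight or !catch.")

# bot.run(TOKEN)  # the token is loaded from an environment variable in the real bot
```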
## Challenges we ran into
We ran into a couple of challenges when it came to designing the battle feature as there were many errors and difficult parts to it that we didn't really understand. After collaborative work, we were able to understand the flaws in our code and fix the bugs the battle feature was going through to make it work. The overall project was relatively complex, as it had us experience a whole new field of programming and work with API calls heavily. It was a new experience for us which made it super challenging, but this taught us so much about APIs and working with discord bots.
## Accomplishments that we're proud of
We are proud of the overall product we have developed as the bot works as we intended it to, which is our biggest achievement. We are also proud of how well the bot works on discord and how simple it is for anyone to play with the PokemonBot.
## What we learned
We learned how to work with new libraries like openpyxl, pokebase, and discord.py, which was a new experience for us. Mainly, we learned to work with a lot of API calls, as the project data depended on the Pokemon API. We also learned important collaboration tactics to work together effectively, and to test and debug problems in the code.
## What's next for Pokemon Discord Bot
The next step is to add tournaments and the feature for players to be able to battle each other. We hope to implement multiplayer, as playing solo is fun, but we want people to engage with other people in a discord server. We hope to implement various forms of multiplayer like tournaments, pokemon gyms, battles, etc, where discord users can challenge other discord users and have fun.
Join our bot
<https://discord.gg/BeFWgp9w>
# Relive and Relearn
*Step foot into a **living photo album** – a window into your memories of your time spent in Paris.*
## Inspiration
Did you know that 70% of people worldwide are interested in learning a foreign language? However, the most effective learning method, immersion and practice, is often challenging for those hesitant to speak with locals or unable to find the right environment. We set out to solve this problem by letting you practice inside memories, even ones you yourself may not have lived. While practicing your language skills and getting personalized feedback, you can interact with and immerse yourself in a new world!
## What it does
Vitre allows you to interact with a photo album containing someone else’s memories of their life! We allow you to communicate and interact with characters around you in those memories as if they were your own. At the end, we provide tailored feedback and an AI-backed DELF (Diplôme d'Études en Langue Française) assessment to quantify your French capabilities. Finally, it makes learning languages fun and effective, encouraging users to learn through nostalgia.
## How we built it
We built all of it in Unity, using C#. We leveraged external APIs to make the project happen.
When the user starts speaking, we use OpenAI's Whisper API to transform speech into text.
Then, we fed that text into co:here, with custom prompts so that it could role play and respond in character.
Meanwhile, we check the responses using co:here rerank to track the progress of the conversation, so we know when to move on from the memory.
We store the whole conversation so that we can later use co:here classify to give the player feedback on their grammar and assign them a level for their French.
Then, using Eleven Labs, we converted co:here’s text to speech and played it for the player to simulate a real conversation.
## Challenges we ran into
VR IS TOUGH – but incredibly rewarding! None of our team knew how to use Unity VR and the learning curve sure was steep. C# was also a tricky language to get our heads around, but we pulled through! Given that our game is multilingual, we ran into challenges when it came to using LLMs, but we were able to use prompt engineering to generate suitable responses in our target language.
## Accomplishments that we're proud of
* Figuring out how to build and deploy on Oculus Quest 2 from Unity
* Getting over that steep VR learning curve – our first time ever developing in three dimensions
* Designing a pipeline between several APIs to achieve desired functionality
* Developing functional environments and UI for VR
## What we learned
* 👾 An unfathomable amount of **Unity & C#** game development fundamentals – from nothing!
* 🧠 Implementing and working with **Cohere** models – rerank, chat & classify
* ☎️ C# HTTP requests in a **Unity VR** environment
* 🗣️ **OpenAI Whisper** for multilingual speech-to-text, and **ElevenLabs** for text-to-speech
* 🇫🇷🇨🇦 A lot of **French**. Our accents got noticeably better over the hours of testing.
## What's next for Vitre
* More language support
* More scenes for the existing language
* Real time grammar correction
* Pronunciation ranking and rating
* Change memories to different voices
## Credits
We took inspiration from the indie game “Before Your Eyes”; we are big fans!
# waitER
One of the biggest causes of crowding and wait times in emergency departments is “boarding,” a term for holding an admitted patient in the ER for hours or even days until an inpatient bed becomes available. Long wait times can affect patient outcomes in various ways: patients may get tired of waiting and leave without receiving medical treatment, especially when resources of the emergency department are overwhelmed. In order to alleviate some of the anxiety and stress associated with the process, we created a web application that aims to bring transparency to patient wait times in the emergency room.
---
### Features
1. **Homepage**: Upon arrival at the ER, the patient receives a unique user ID used to access their personal homepage, mainly featuring their current placement in the queue along with other supplementary educational information that may be of interest.
2. **Triage page**: (only accessible to admin) Features a series of triage questions used to determine the ESI of that particular patient.
3. **Admin dashboard**: (only accessible to admin) Features a scrollable queue of all patients in line, along with their ID, arrival time, and ESI. The admin is able to add users to the queue using the triage page and to remove users when it is their turn for care. This then alerts the patient through the homepage.
*Note*: When any update occurs, all pages are updated to reflect the latest changes.
---
### Running this program
1. Make sure node and npm are installed on the system.
2. Install all node modules with `npm install`.
3. Build the application with `npm run build`.
4. Run the application with `npm run start`.
5. Access the triage page at `localhost:8080/triage` and add a patient to the queue.
6. After adding the patient to the queue, the page will display a link to the patient's individual page along with a link to the dashboard.
## Inspiration
Lots of applications require you to visit their website or application for initial tasks such as signing up on a waitlist to be seen. What if these initial tasks could be performed at the convenience of the user on whatever platform they want to use (text, slack, facebook messenger, twitter, webapp)?
## What it does
In a medical setting, allows patients to sign up using platforms such as SMS or Slack to be enrolled on the waitlist. The medical advisor can go through this list one by one and have a video conference with the patient. When the medical advisor is ready to chat, a notification is sent out to the respective platform the patient signed up on.
## How I built it
I set up this whole app by running microservices on StdLib. There are multiple microservices responsible for different activities such as sms interaction, database interaction, and slack interaction. The two frontend Vue websites also run as microservices on StdLib. The endpoints on the database microservice connect to a MongoDB instance running on mLab. The endpoints on the sms microservice connect to the MessageBird microservice. The video chat was implemented using TokBox. Each microservice was developed one by one and then also connected one by one like building blocks.
## Challenges I ran into
Initially, getting the microservices to connect to each other, and then debugging microservices remotely.
## Accomplishments that I'm proud of
Connecting multiple endpoints to create a complex system more easily using microservice architecture.
## What's next for Please Health Me
Developing more features such as position in the queue and integrating with more communication channels such as Facebook Messenger. This idea can also be expanded into different scenarios, such as business partners signing up for advice from a busy advisor, or fans being able to sign up and be able to connect with a social media influencer based on their message.
## Why Raven?
*Why did the raven become a style guru? Because it always knows how to "feather" the nest!*
But jokes aside, navigating the maze of beard styles can be as tricky as finding a feather in a bird's nest (pun intended). Raven simplifies the grooming journey, ensuring your facial game is always on point.
## What it does
Raven employs advanced image manipulation techniques and algorithms for facial parameter detection, identifying key structural areas, and analyzing curves. Leveraging mathematical concepts like polynomial approximation and Gaussian distribution, the algorithm extensively explores its database to provide personalized suggestions for the ideal beard style.
Raven's interactive UI allows the user to upload a picture, generate the best style and subsequently save the image, giving a seamless user experience.
Or, for those who speak better emoji than code: 👨 👉 📸 👉 🧔♂️ 👉 👍
## How we built it
We used Java/JavaCV to code the backend, making use of existing public algorithms such as Kazemi's algorithm and CascadeClassifier. We curated a database of different beard types and styles from Adobe. For the frontend, we used JavaFX to code the UI, allowing the user to upload, generate and save images with just a click of a button.
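Raven itself is written in Java with JavaCV, but the equivalent cascade-detection step can be sketched in Python/OpenCV for illustration (the cascade file is the one credited at the end of this writeup):

```python
import cv2

def detect_face(image_path: str):
    """Return the largest detected face rectangle (x, y, w, h), or None."""
    cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # the largest face bounds anchor where the beard overlay is drawn
    return max(faces, key=lambda f: f[2] * f[3])
```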
## Challenges we ran into
We ran into a plethora of challenges when coding Raven. For starters, image processing was initially slow, so we moved to a multi-threaded system that runs the UI on one thread and the backend on another, keeping image processing swift and seamless.
Secondly, whilst implementing the facial hair overlay step, we faced errors with proper alignment and sizing. We had to implement stricter manipulation techniques such as coordinate mapping and structure cropping to ensure the facial hair aligns with the curves of the face, all within 24 hours, which will surely be a memory we never forget.
## Accomplishments that we're proud of
We are proud of every step that we accomplished through the journey of Raven. Our notable achievements include the implementation of a multi-thread system which has significantly reduced our image processing time.
We are particularly proud of digging into mathematical models such as the Gaussian distribution and coordinate systems and using that knowledge to strengthen our implementation of Raven's image processing capability.
## What we learned
We gained a deep insight into computer image processing and analysis. We learnt some important techniques to manipulate pictures, extract required areas of the face and use certain key facial parameters to add any overlays on the face.
All in all, since most of us were first time hackers, we learnt how to carry out such a task within 24 hours while keeping ourselves cool and open to learning something new every 30 minutes.
## Sustainability
The idea of making Raven sustainable was key to us. We believe that by implementing a more optimized and efficient algorithm, we bring a more cost-effective and less computationally costly solution to the table. Furthermore, this shift from on-paper facial hair suggestions to digital suggestions reduces the need for paper in general.
## What's next for Raven
We could definitely extend this idea to add any overlays on the face in the future. This could include suggestions for the most suitable nose/lip accessories, head hair, face tattoos etc. Conclusively, Raven has the potential to be a one-stop-shop for any style suggester.
## Credentials
1) Kazemi's Algorithm: “Face Alignment with Part-Based Modeling” - Vahid Kazemi
2) Haar Cascades Frontal Algorithm: [link](https://github.com/opencv/opencv/blob/master/data/haarcascades/haarcascade_frontalface_alt2.xml)
# Project Story: Proactive, AI-Driven Cooling and Power Management
## About the Project
### Inspiration and Market Validation
Our project was inspired by insights gained from the NSF ICorps course in May, where we received strong market validation for the problem we aimed to solve. To ensure we were addressing a real need, we conducted over 20 customer interviews with representatives from big tech firms, industry experts, and data center professionals who specialize in cooling and power distribution services. These interviews confirmed that the problem of inefficient cooling and power management in data centers is prevalent and impactful.
### Learning and Development
Throughout this project, we learned extensively about the transition from CPUs to GPUs in data centers and the associated challenges in cooling and power distribution. Traditional air-cooling systems are becoming inadequate as data centers shift towards liquid cooling to manage the increased heat output from GPUs. Despite this transition, even large tech companies are still experimenting with and refining liquid cooling systems. This gap in the market presents an opportunity for innovative solutions like ours.
### Building the Project
Our hackathon project aimed to create an MVP that leverages AI to make cooling and power management predictive rather than reactive. We used SLURM, a workload manager and job scheduling system, to gather data about GPU availability and job scheduling. Our system predicts when and where jobs will run and proactively triggers the cooling systems before the GPUs heat up, thereby optimizing cooling efficiency.
### Challenges Faced
We faced several challenges during the development of this project:
1. **Data Collection:** Gathering accurate and comprehensive historical job data, node availability, and job scheduling logs from SLURM was time-consuming and required meticulous attention to detail.
2. **Model Accuracy:** Building a predictive model that could accurately forecast job run times and node allocations was complex. We tested various machine learning models, including Random Forest, Gradient Boosting Machines, LSTM, and GRU, to improve prediction accuracy.
3. **Integration with Existing Systems:** Integrating our predictive system with existing data center infrastructure, which traditionally relies on reactive cooling mechanisms, required careful planning and implementation.
## Implementation Details
### Steps to Implement the Project
1. **Data Collection:**
* **Historical Job Data:** Collect data on job submissions, including job ID, submission time, requested resources (CPU, memory, GPUs), priority, and actual start and end times.
* **Node Data:** Gather information on node availability, current workload, and resource usage.
* **Job Scheduling Logs:** Extract SLURM scheduling logs that detail job allocation and execution.
2. **Feature Engineering:**
* **Create Relevant Features:** Include features such as time of submission, day of the week, job priority, resource requirements, and node state (idle, allocated).
* **Time Series Features:** Use lag features (e.g., previous job allocations) and rolling statistics (e.g., average load in the past hour).
3. **Model Selection:**
* **Classification Models:** Random Forest, Gradient Boosting Machines, and Logistic Regression for predicting server allocation.
* **Time Series Models:** LSTM and GRU for predicting the time of allocation.
* **Regression Models:** Linear Regression and Decision Trees for predicting the time until allocation.
4. **Predictive Model Approach:**
* **Data Collection:** Gather historical scheduling data from SLURM logs, including job submissions and their attributes, node allocations, and resource usage.
* **Feature Engineering:** Develop features related to job priority, requested resources, expected runtime, and node state.
* **Modeling:** Use a classification approach to predict the node allocation for a job or a regression approach to predict resource allocation probabilities and select the node with the highest probability.
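To make steps 2 to 4 concrete, here is a minimal sketch of the pipeline we have in mind: lag and rolling-load features built from a SLURM job log, feeding a scikit-learn Random Forest that predicts the allocated node. The column names are illustrative assumptions rather than exact SLURM field names.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed columns exported from SLURM accounting logs:
# submit_time, req_cpus, req_mem_gb, req_gpus, priority, alloc_node
jobs = pd.read_csv("slurm_job_log.csv", parse_dates=["submit_time"]).sort_values("submit_time")

# Calendar features plus lag / rolling-load features
jobs["hour"] = jobs["submit_time"].dt.hour
jobs["dayofweek"] = jobs["submit_time"].dt.dayofweek
jobs["prev_gpus"] = jobs["req_gpus"].shift(1).fillna(0)                    # lag feature
jobs["gpus_last_hour"] = (jobs.rolling("1h", on="submit_time")["req_gpus"]
                              .sum().shift(1).fillna(0))                   # rolling load

features = ["hour", "dayofweek", "req_cpus", "req_mem_gb",
            "req_gpus", "priority", "prev_gpus", "gpus_last_hour"]
X_train, X_test, y_train, y_test = train_test_split(
    jobs[features], jobs["alloc_node"], test_size=0.2, shuffle=False)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("node-prediction accuracy:", model.score(X_test, y_test))
```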
### Results and Benefits
By implementing this predictive cooling and power management system, we anticipate the following benefits:
1. **Increased Cooling Efficiency:** Proactively triggering cooling systems based on job predictions reduces the power required for cooling by at least 10%, resulting in significant cost savings.
2. **Extended Equipment Life:** Optimized cooling management increases the lifespan of data center equipment by reducing thermal stress.
3. **Environmental Impact:** Reducing the power required for cooling contributes to lower overall energy consumption, aligning with global sustainability goals.
### Future Plans
Post-hackathon, we plan to further refine our MVP and seek early adopters to implement this solution. The transition to GPU-based data centers is an ongoing trend, and our proactive cooling and power management system is well-positioned to address the associated challenges. By continuing to improve our predictive models and integrating more advanced AI techniques, we aim to revolutionize data center operations and significantly reduce their environmental footprint.
## Inspiration
**Machine learning** is a powerful tool for automating tasks that are not scalable at the human level. However, when deciding on things that can critically affect people's lives, it is important that our models do not learn biases. [Check out this article about Amazon's automated recruiting tool which learned bias against women.](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G?fbclid=IwAR2OXqoIGr4chOrU-P33z1uwdhAY2kBYUEyaiLPNQhDBVfE7O-GEE5FFnJM) However, to completely reject the usefulness of machine learning algorithms to help us automate tasks is extreme. **Fairness** is becoming one of the most popular research topics in machine learning in recent years, and we decided to apply these recent results to build an automated recruiting tool which enforces fairness.
## Problem
Suppose we want to learn a machine learning algorithm that automatically determines whether job candidates should advance to the interview stage using factors such as GPA, school, and work experience, and that we have data from which past candidates received interviews. However, what if in the past, women were less likely to receive an interview than men, all other factors being equal, and certain predictors are correlated with the candidate's gender? Despite having biased data, we do not want our machine learning algorithm to learn these biases. This is where the concept of **fairness** comes in.
Promoting fairness has been studied in other contexts such as predicting which individuals get credit loans, crime recidivism, and healthcare management. Here, we focus on gender diversity in recruiting.
## What is fairness?
There are numerous possible metrics for fairness in the machine learning literature. In this setting, we consider fairness to be measured by the average difference in false positive rate and true positive rate (**average odds difference**) for unprivileged and privileged groups (in this case, women and men, respectively). High values for this metric indicates that the model is statistically more likely to wrongly reject promising candidates from the underprivileged group.
## What our app does
**jobFAIR** is a web application that helps human resources personnel keep track of and visualize job candidate information and provide interview recommendations by training a machine learning algorithm on past interview data. There is a side-by-side comparison between training the model before and after applying a *reweighing algorithm* as a preprocessing step to enforce fairness.
### Reweighing Algorithm
If the data were unbiased, being accepted and being a woman would be statistically independent, so the probability of the combination would simply be the product of the two individual probabilities. By carefully choosing weights for each example, we can de-bias the data without having to change any of the labels. We determine the actual probability of being both a woman and accepted, then set the weight for the woman-and-accepted category to the ratio of expected to actual probability. In other words, if the actual data has a much smaller probability than expected, examples from this category are given a higher weight (>1); otherwise, they are given a lower weight. The same formula is applied to the other three of the four combinations of gender x acceptance. The reweighed sample is then used for training.
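As a concrete illustration of that weighting rule, here is a minimal pandas sketch (the column names `gender` and `accepted` are placeholders; the app itself uses IBM's AIF360 implementation rather than this hand-rolled version):

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame) -> pd.Series:
    """Weight each row by expected / observed probability of its
    (gender, accepted) combination, as in Kamiran & Calders (2012)."""
    weights = pd.Series(1.0, index=df.index)
    for g in df["gender"].unique():
        for a in df["accepted"].unique():
            mask = (df["gender"] == g) & (df["accepted"] == a)
            expected = (df["gender"] == g).mean() * (df["accepted"] == a).mean()
            observed = mask.mean()
            if observed > 0:
                weights[mask] = expected / observed
    return weights

# The weights plug straight into sklearn, e.g.:
# LogisticRegression().fit(X, y, sample_weight=reweighing_weights(df))
```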
## How we built it
We trained two classifiers on the same bank of resumes, one with fairness constraints and the other without. We used IBM's [AIF360](https://github.com/IBM/AIF360) library to train the fair classifier. Both classifiers use the **sklearn** Python library for machine learning models. We run a Python **Django** server on an AWS EC2 instance. The machine learning model is loaded into the server from the filesystem on prediction time, classified, and then the results are sent via a callback to the frontend, which displays the metrics for an unfair and a fair classifier.
## Challenges we ran into
Training and choosing models with appropriate fairness constraints. After reading relevant literature and experimenting, we chose the reweighing algorithm ([Kamiran and Calders 2012](https://core.ac.uk/download/pdf/81728147.pdf?fbclid=IwAR3P1SFgtml7w0VNQWRf_MK3BVk8WyjOqiZBdgmScO8FjXkRkP9w1RFArfw)) for fairness, logistic regression for the classifier, and average odds difference for the fairness metric.
## Accomplishments that we're proud of
We are proud that we saw tangible differences in the fairness metrics of the unmodified classifier and the fair one, while retaining the same level of prediction accuracy. We also found a specific example of when the unmodified classifier would reject a highly qualified female candidate, whereas the fair classifier accepts her.
## What we learned
Machine learning can be made socially aware; applying fairness constraints helps mitigate discrimination and promote diversity in important contexts.
## What's next for jobFAIR
Hopefully we can make the machine learning more transparent to those without a technical background, such as showing which features are the most important for prediction. There is also room to incorporate more fairness algorithms and metrics.
## 🌟 Inspiration
We were inspired by a friend's story about his family’s energy company in France, where they sometimes faced negative pricing—meaning they had to pay to offload surplus electricity during periods of low demand. It seemed unfair that people generating clean energy could be penalized for overproduction.
As of 2023, over 60 countries generate more than 10% of their electricity from wind energy. However, when supply exceeds demand, prices can drop below zero, forcing providers to pay others to take excess electricity. A striking example occurred in Germany in 2016, where prices fell to -130 euros per megawatt-hour.
Meanwhile, data centers consumed 416 terawatt-hours of electricity in 2023, about 3% of global power. This figure is rising, especially with the growing AI sector. We thought—why not harness that excess renewable energy for cloud computing? This idea became EcoCompute.
## ☁️ What it does
EcoCompute is a cloud computing platform that allows energy providers to monetize surplus renewable energy during periods of negative pricing. By offering affordable compute resources, it converts wasted energy into valuable computing power. Providers can deploy nodes, which dynamically allocate workloads based on real-time energy production. This gives users access to low-cost computing while helping providers avoid paying to offload excess energy.
## 🛠️ How we built it
We developed a React-based front end that operates like a cloud notebook, providing an interactive experience for users. This front end connects via WebSockets to a FastAPI server, which serves as the backbone of our application.
The server manages connections to energy provider nodes, which are orchestrated using Docker. These nodes not only handle compute tasks but also utilize real-time data to adjust workloads according to energy availability. We focused primarily on German data for wind power, which made it easier to access open-source information. This decision allowed us to build a more robust and reliable platform since Germany has a wealth of data available and is a leader in renewable energy production.
Additionally, we incorporated statistical models that predict energy production and pricing trends, allowing us to optimize the allocation of compute tasks during surplus periods. This proactive approach ensures that we maximize the use of renewable energy while providing users with efficient computing resources.
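A stripped-down sketch of the node-facing WebSocket handling is shown below; the message fields and the `current_surplus_mw()` helper are illustrative assumptions rather than our exact protocol.

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()
task_queue: list = []               # pending compute jobs submitted by users

def current_surplus_mw() -> float:
    """Placeholder for the wind-surplus forecast driving the scheduler (MW)."""
    return 42.0

@app.websocket("/ws/node")
async def node_socket(ws: WebSocket):
    """Provider nodes connect here; we only hand out work while surplus exists."""
    await ws.accept()
    while True:
        msg = await ws.receive_json()            # e.g. {"status": "idle"}
        if msg.get("status") == "idle" and task_queue and current_surplus_mw() > 0:
            await ws.send_json({"type": "task", "payload": task_queue.pop(0)})
        else:
            await ws.send_json({"type": "wait"})
```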
## 🚧 Challenges we ran into
Finding open-source data on renewable energy and managing WebSockets effectively were significant challenges. However, by focusing on German data, we found it easier to obtain relevant statistics and trends. We also learned that Europe, being a leader in wind energy, experiences the most negative pricing situations.
## 🏆 Accomplishments that we're proud of
We’re proud to have created a platform that turns wasted energy into useful compute power. Not only does this reduce energy waste, but it also provides an eco-friendly, cost-effective solution for compute tasks, benefiting both energy providers and users.
## 📚 What we learned
Through this project, we gained a deeper understanding of renewable energy markets, particularly the complexities of balancing energy production with computing needs. We learned that while battery storage is crucial for managing surplus energy, it can be prohibitively expensive. By focusing on regions with existing infrastructure for energy storage, such as dams in Québec, we can better manage excess energy.
## 🚀 What's next for EcoCompute
Our next step is to enhance the computing experience by allowing users to upload their own containers for execution on the platform. We plan to expand EcoCompute by partnering with more energy providers worldwide and integrating additional renewable sources like solar and hydroelectric power. Our goal is to make EcoCompute the go-to platform for turning surplus energy into sustainable cloud computing.
## Inspiration
We were inspired by early text-based RPG games as well as current environmental issues plaguing our planet.
## What it does
Operation Foliage is a short and simple text-based RPG game that allows the user to combat monsters and play small minigames to achieve a larger goal.
## How we built it
This game was built entirely in Java.
## Challenges we ran into
A lot of our team members were inexperienced programmers, so there were hurdles in planning out how exactly to implement the various game features we want.
## Accomplishments that we're proud of
Once you get past the hurdles and your code successfully runs, you get a big sense of accomplishment. We're most proud of our combat system and working shop features.
## What we learned
We learned a lot about utilizing Java and more specifically object oriented programming.
## What's next for Operation Foliage - A text-based RPG
This game will continue to be developed as a passion project or hobby.
## Inspiration
Several presentations at our school about raising awareness of mental health encouraged us to create this project.
## What it does
Through entertaining and encouraging messages delivered utilizing a visual short story, our project raises awareness about mental health, particularly depression.
## How we built it
First we brainstormed ideas about what the style of the visual short story would look like. After deciding on a minimalistic/cartoon art style, we began designing the story. We wanted an engaging and meaningful message while keeping the story short. After creating the outline of the story, we began adding dialogue and appropriate music to the game. In order to make a really strong point, we also incorporated an emotive poem by Alex Elle, relevant to our message, at the conclusion of the game.
We initially designed this story using Java but ran into many problems, specifically with packaging executables, so we decided to switch to Unity: a piece of software that, although we had no clue how to utilize it, was effective for creating games. We split up to create different parts of the game; for example, one person worked on the main menu while another wrote code for the game or created the images for the character. We used paint.net to create the characters and C# as the programming language.
## Challenges we ran into
During these chaotic 36 hours, we had to overcome a number of challenges. Learning new pieces of software and languages, and working together to learn while viewing several tutorials was our biggest obstacle. We had never used C#, Paint.NET, or unity before, but we were determined to push ourselves and do something fantastic. Another problem we had was writing the story since we wanted to capture the player's attention and convey a meaningful message in a short period. The game is a lot shorter than we wanted it to be and that is mainly due to time constraints in addition to technical issues.
## Accomplishments that we're proud of
We are proud of finishing the game considering we had no prior experience in all the resources we used. We are also pleased that we were able to create something that can help others and make an impact in people's lives.
## What we learned
We learned something very important during this hackathon. In order to make the most of our time, we learnt how to acquire new things quickly and then share that knowledge with one another. This hackathon especially helped us gauge how much we can realistically complete in a short timespan.
## What's next for Talk To Me
Talk to Me will continue to spread awareness and educate people about the serious dangers of depression and mental health struggles. We encourage individuals to ask for assistance when necessary and to check in on loved ones. Given the framework we created for ourselves, adding more content to the game would be incredibly easy, which means we could very possibly expand this into a full, proper game.
## Inspiration
Medical hospitals often conduct “discharge interviews” with healing patients deemed healthy enough to leave the premises. The purpose of this test is to determine what accommodations patients will require post-hospital-admittance. For instance, elderly patients who live in multi-level houses might require extra care and attention. The issue, however, is that doctors and nurses must conduct such interviews and then spend about 30 to 40 minutes afterwards documenting the session. This is an inefficient use of time. A faster, streamlined system would allow medical professionals to spend their time on more pressing matters, such as examining or interviewing more patients.
## What it does
Airscribe is a smart, automated interview transcriber that is also able to do a cursory examination of the exchange and give recommendations to the patient. (We don’t intend for this to be the sole source of recommendations, however. Doctors will likely review the documents afterwards and follow up accordingly.)
## How we built it
Speech-to-text: built on IBM Watson.
Text-to-better-text: the interviewer's and patient's comments are processed with a Python script that breaks the text into Q & A dialogue. An algorithm evaluates the patient's responses to find key information, which is displayed in an easy-to-read, standardized survey format. Based on the patient's responses to the questions, suggestions and feedback for the patient after hospital discharge are generated.
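A simplified sketch of that text-processing step: pair each interviewer question with the patient reply that follows it, then map answers onto survey fields by keyword matching. The speaker labels and keyword lists are assumptions for illustration.

```python
def to_qa_pairs(transcript):
    """Pair each interviewer question with the patient reply that follows it.
    `transcript` is an ordered list of (speaker, utterance) tuples."""
    pairs, question = [], None
    for speaker, utterance in transcript:
        if speaker == "interviewer":
            question = utterance
        elif speaker == "patient" and question:
            pairs.append({"question": question, "answer": utterance})
            question = None
    return pairs

SURVEY_KEYWORDS = {
    "lives_alone": ["alone", "by myself"],
    "home_has_stairs": ["stairs", "second floor", "multi-level"],
}

def fill_survey(pairs):
    """Mark a survey field True if any answer mentions one of its keywords."""
    survey = {field: False for field in SURVEY_KEYWORDS}
    for pair in pairs:
        answer = pair["answer"].lower()
        for field, words in SURVEY_KEYWORDS.items():
            if any(w in answer for w in words):
                survey[field] = True
    return survey
```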
## Challenges we ran into
Getting speech recognition to work, recognizing question types, recognizing answer types, formatting them into the HTML.
## Accomplishments that we're proud of
After talking with health professionals we decided this idea was a better direction than our first idea and completed it in 8 hours.
## What we learned
Teamwork! Javascript! Python!
Web development for mobile is difficult (\* cough \* phonegap) !
## What's next for Airscribe
* Smarter, more flexible natural language processing.
* Draw from a database and use algorithms to generate better feedback/suggestions.
* A more extensive database of questions.
## Inspiration
My own research publication was actually based on heuristic analysis with pathfinding, aimed at aiding first responders in dangerous environments.
## What it does
The algorithm takes Waze JSON data, adds extra weight to congested road segments, and finds the most efficient path to take in order to avoid traffic congestion and save time.
## How I built it
I used my mathematical skills and knowledge of algorithms to create a powerful, working route-finding algorithm that can optimize delivery routes.
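At its core this is shortest-path search over a road graph whose edge weights are inflated by Waze congestion levels; a minimal sketch is below (the 0.5-per-jam-level scaling factor is an assumption).

```python
import heapq

def shortest_path(graph, congestion, start, goal):
    """Dijkstra over a road graph. `graph[u]` maps neighbour v -> base travel time;
    `congestion[(u, v)]` is a Waze jam level (0 = clear road)."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, base_time in graph[u].items():
            w = base_time * (1 + 0.5 * congestion.get((u, v), 0))  # penalise jams
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return path[::-1], dist.get(goal)
```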
## Challenges we ran into
Obtaining the data from the map was miserable, but doable.
## What's next for HEURE
* A formal research publication! | ## Inspiration
The inspiration for our project came from hearing about the massive logistical challenges involved in organising evacuations for events such as Hurricane Florence. We felt that we could apply our knowledge of solving optimisation problems to great effect in this area.
## What it does
ResQueue is a web-app that is designed to be deployed by an aid organisation that is organising rescue or evacuation efforts. It provides an interface for people in need of rescue to mark their location and the urgency of their request on a map.
In the admin interface the aid organisation is able to define the resources it has in terms of capacity and quantity of vehicles (e.g. 3 buses with 50 seats, 5 minibuses with 10 seats). Using clustering followed by path finding, a route is generated for each vehicle that provides an efficient overall plan for rescuing as many people as possible, as fast as possible.
## How we built it
The WebApp was built using Python combined with the Flask web framework. It was all hosted on Azure, with the database being an Azure Cosmo DB instance. This infrastructure setup would allow us to scale the project in times of crisis.
The routing is done using OpenStreetMap data, combined with the C++ based OSRM project. Groups of people requiring rescue are clustered using a minimum-spanning-tree based approach, combining additional weather data obtained from the IBM Cloud Weather API and self-reported urgency. A greedy heuristic for the Travelling Salesman Problem (the farthest-insertion algorithm) was used to select the final order of visiting the users in each cluster.
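For reference, here is a compact sketch of the farthest-insertion heuristic named above, run on plain coordinates with straight-line distances rather than the OSRM road distances the real system uses:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def farthest_insertion(points):
    """Illustrative farthest-insertion tour over >= 3 points; returns point indices."""
    remaining = list(range(len(points)))
    # Seed the tour with the two mutually farthest points.
    a, b = max(((i, j) for i in remaining for j in remaining if i < j),
               key=lambda p: dist(points[p[0]], points[p[1]]))
    tour = [a, b]
    remaining = [i for i in remaining if i not in tour]
    while remaining:
        # Pick the remaining point farthest from the current tour...
        k = max(remaining, key=lambda i: min(dist(points[i], points[t]) for t in tour))
        # ...and insert it where it increases the tour length the least.
        best_pos = min(range(len(tour)),
                       key=lambda p: dist(points[tour[p]], points[k])
                       + dist(points[k], points[tour[(p + 1) % len(tour)]])
                       - dist(points[tour[p]], points[tour[(p + 1) % len(tour)]]))
        tour.insert(best_pos + 1, k)
        remaining.remove(k)
    return tour

print(farthest_insertion([(0, 0), (0, 5), (5, 5), (5, 0), (2, 2)]))
```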
## Challenges we ran into
As the technical core of the problem we were attempting to solve (the Travelling Salesman Problem and other vehicle routing problems) is NP-hard, no exact algorithm with an acceptable running time exists, so we needed to determine a tradeoff between execution time and solution quality. We did this by reading papers and experimenting with various heuristics and implementations.
We set up automatically scaling resources with Azure; this was necessary to allow preprocessing of the OpenStreetMap data to be done in a reasonable timeframe. OSRM also used a non-standard GPS coordinate ordering, which cost us a tremendous amount of time and sanity.
## Accomplishments that we're proud of
We managed to complete our project to a standard we can be proud of in the time frame allocated. We hope that what we have developed can be used to help those in need.
In doing so we came up with novel solutions to the tough technical challenges we faced and made difficult decisions and tradeoffs along the way.
## What we learned
We learned a great deal about the current state of the art in solving TSP. It was also a first for all of us using Azure's services.
## What's next for ResQueue
We will probably continue to add to ResQueue, there were many other cool features that didn't make the final cut suggested both from within the team and also from passerby hackathon-ers.
* A companion app for both drivers and rescuees to allow Uber-like tracking
* Support for special cases such as needing wheelchair accessible vehicles
* General improvements to the algorithms used, both efficiency and accuracy
* Allowing admins to geofence the area they are able to service. | ## Inspiration
This project was a response to the events that occurred during Hurricane Harvey in Houston last year, wildfires in California, and the events that occurred during the monsoon in India this past year. 911 call centers are extremely inefficient in providing actual aid to people due to the unreliability of tracking cell phones. We are also informing people of the risk factors in certain areas so that they will be more knowledgeable when making decisions for travel, their futures, and taking preventative measures.
## What it does
Supermaritan provides a platform for people who are in danger and affected by disasters to send out "distress signals" specifying how severe their damage is and the specific type of issue they have. We store their location in a database and present it live on react-native-map API. This allows local authorities to easily locate people, evaluate how badly they need help, and decide what type of help they need. Dispatchers will thus be able to quickly and efficiently aid victims. More importantly, the live map feature allows local users to see live incidents on their map and gives them the ability to help out if possible, allowing for greater interaction within a community. Once a victim has been successfully aided, they will have the option to resolve their issue and store it in our database to aid our analytics.
Using information from previous disaster incidents, we can also provide information about the safety of certain areas. Taking the previous incidents within a certain range of latitudinal and longitudinal coordinates, we can calculate what type of incident (whether it be floods, earthquakes, fire, injuries, etc.) is most common in the area. Additionally, by taking a weighted average based on the severity of previous resolved incidents of all types, we can generate a risk factor that provides a way to gauge how safe the range a user is in based off the most dangerous range within our database.
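The risk factor described above amounts to a weighted average of past incident severities, normalized against the most dangerous range in the database. Below is an illustrative sketch of that calculation; the app itself runs on Node.js, and the field names and severity scale here are assumptions.

```python
from collections import Counter

def summarize_range(incidents, max_weighted_severity):
    """incidents: list of dicts like {"type": "flood", "severity": 1-5} for one lat/lon range."""
    if not incidents:
        return {"most_common": None, "risk_factor": 0.0}
    most_common = Counter(i["type"] for i in incidents).most_common(1)[0][0]
    weighted = sum(i["severity"] for i in incidents) / len(incidents)
    return {
        "most_common": most_common,
        # 1.0 means "as dangerous as the worst range we know about".
        "risk_factor": round(weighted / max_weighted_severity, 2),
    }

sample = [{"type": "flood", "severity": 4},
          {"type": "flood", "severity": 5},
          {"type": "fire", "severity": 2}]
print(summarize_range(sample, max_weighted_severity=5.0))
```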
## How we built it
We used react-native, MongoDB, Javascript, NodeJS, and the Google Cloud Platform, and various open source libraries to help build our hack.
## Challenges we ran into
Ejecting react-native from Expo took a very long time and prevented one of the members in our group who was working on the client-side of our app from working. This led to us having a lot more work for us to divide amongst ourselves once it finally ejected.
Getting acquainted with react-native in general was difficult. It was fairly new to all of us and some of the libraries we used did not have documentation, which required us to learn from their source code.
## Accomplishments that we're proud of
Implementing the Heat Map analytics feature was something we are happy we were able to do because it is a nice way of presenting the information regarding disaster incidents and alerting samaritans and authorities. We were also proud that we were able to navigate and interpret new APIs to fit the purposes of our app. Generating successful scripts to test our app and debug any issues was also something we were proud of and that helped us get past many challenges.
## What we learned
We learned that while some frameworks have their advantages (for example, React can create projects at a fast pace using built-in components), many times, they have glaring drawbacks and limitations which may make another, more 'complicated' framework, a better choice in the long run.
## What's next for Supermaritan
In the future, we hope to provide more metrics and analytics regarding safety and disaster issues for certain areas. Showing disaster trends over time and displaying risk factors for each individual incident type is something we definitely are going to do in the future.
## Inspiration
Neuro-Matter is an integrated social platform designed to combat not one, but 3 major issues facing our world today: Inequality, Neurological Disorders, and lack of information/news.
We started Neuro-Matter with the aim of helping people facing inequality at different levels of society. While inequality is often assumed to lead only to physical violence, its impact at the neurological/mental level is usually neglected.
Upon seeing these disastrous effects, we realized the need of the hour and came up with Neuro-Matter to effectively combat these issues, in addition to the most pressing issue our world faces today: mental health!
## What it does
1. "Promotes Equality" and provides people the opportunity to get out of mental trauma.
2. Provides a hate-free social environment.
3. Helps People cure the neurological disorder
4. Provide individual guidance to support people with the help of our volunteers.
5. Provides reliable news/information.
6. Have an AI smart chatbot to assist you 24\*7.
## How we built it
Overall, we used HTML, CSS, React.js, Google Cloud, Dialogflow, Google Maps, and Twilio's APIs. We used Google Firebase's Realtime Database to store, organize, and secure our user data. This data is used for login and signup for the service. The service's backend is made with Node.js, which is used to serve the webpages and enable many useful functions. We have multiple pages as well, such as the home page, profile page, signup/login pages, and the news/information/thought-sharing page.
## Challenges we ran into
We had a couple of issues with the database, as password authentication would only work some of the time. Moreover, since we used Visual Studio's collaborative editing for the first time, we faced many VS Code issues (not code related). Since we were working in the same time zones, it was not so difficult for all of us to work together, but it was hard to get everything done on time and end up with a robust working module.
## Accomplishments that we're proud of
Overall, we are proud to create a working social platform like this and are hopeful to take it to the next steps in the future as well. Specifically, each of our members is proud of their amazing contributions.
We believe in the platform we have developed and are determined to take it forward even beyond the hackathon to help people in real life.
## What we learned
We learned a lot, to say the least!! Overall, we learned a lot about databasing and were able to strengthen our React.js, Machine Learning, HTML, and CSS skills as well. We successfully incorporated Twilio's APIs and were able to pivot and send messages. We have developed a smart bot that is capable of both text and voice-based interaction. Overall, this was an extremely new experience for all of us and we greatly enjoyed learning new things. This was a great project to learn more about platform development.
## What's next for Neuro-Matter
This was an exciting new experience for all of us and we're all super passionate about this platform and can't wait to hopefully unveil it to the general public to help people everywhere by solving the issue of Inequality. | ## Inspiration
Given the increase in mental health awareness, we wanted to focus on therapy treatment tools in order to enhance the effectiveness of therapy. Therapists rely on hand-written notes and personal memory to track their clients' emotional progress, and there is no assistive digital tool for therapists to keep track of clients' sentiment throughout a session. Therefore, we want to equip therapists with the ability to better analyze raw data and track patient progress over time.
## Our Team
* Vanessa Seto, Systems Design Engineering at the University of Waterloo
* Daniel Wang, CS at the University of Toronto
* Quinnan Gill, Computer Engineering at the University of Pittsburgh
* Sanchit Batra, CS at the University of Buffalo
## What it does
Inkblot is a digital tool that gives therapists a second opinion by performing sentiment analysis on a patient throughout a therapy session. It keeps track of client progress as they attend more therapy sessions and gives therapists useful data points that aren't usually captured in typical hand-written notes.
Key features include the ability to scrub across the entire therapy session, allowing the therapist to read the transcript and look at specific keywords associated with certain emotions. Another key feature is the progress tab, which displays past therapy sessions with easy-to-interpret sentiment data visualizations, allowing therapists to see the overall ups and downs across a patient's visits.
## How we built it
We built the front end using Angular and hosted the web page locally. Given a complex data set, we wanted to present our application in a simple and user-friendly manner. We created a styling and branding template for the application and designed the UI from scratch.
For the back-end we hosted a REST API built using Flask on GCP in order to easily access API's offered by GCP.
Most notably, we took advantage of the Google Vision API to perform sentiment analysis and used the Speech-to-Text API to transcribe a patient's therapy session.
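As a rough illustration of how such a backend endpoint could be wired up (this is a sketch, not Inkblot's actual API: the route name, audio format, and request shape are assumptions, and the Cloud Speech-to-Text Python client is used for the transcription step):

```python
from flask import Flask, request, jsonify
from google.cloud import speech

app = Flask(__name__)
speech_client = speech.SpeechClient()

@app.route("/transcribe", methods=["POST"])
def transcribe():
    # Audio arrives as raw LINEAR16 bytes in the request body (an assumption).
    audio = speech.RecognitionAudio(content=request.data)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = speech_client.recognize(config=config, audio=audio)
    transcript = " ".join(r.alternatives[0].transcript for r in response.results)
    return jsonify({"transcript": transcript})

if __name__ == "__main__":
    app.run(port=8080)
```

The transcript returned here would then be handed to the sentiment/keyword analysis step and synchronized with the session timeline on the frontend.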
## Challenges we ran into
* Integrated a chart library in Angular that met our project’s complex data needs
* Working with raw data
* Audio processing and conversions for session video clips
## Accomplishments that we're proud of
* Using GCP in its full effectiveness for our use case, including technologies like Google Cloud Storage, Google Compute VM, Google Cloud Firewall / LoadBalancer, as well as both Vision API and Speech-To-Text
* Implementing the entire front-end from scratch in Angular, with the integration of real-time data
* Great UI Design :)
## What's next for Inkblot
* Database integration: Keeping user data, keeping historical data, user profiles (login)
* Twilio Integration
* HIPAA compliance
* Investigate blockchain technology with the help of BlockStack
* Testing the product with professional therapists | ## Inspiration
After playing laser tag as kids and recently rediscovering it as adults but not having any locations near us to play, our team looked for a solution that wasn't tied down to a physical location. We wanted to be able to play laser tag anywhere, anytime!
## What it does
Quick Connect Laser Tag allows up to 20 players to pick up one of our custom-made laser blaster and sensor devices and engage in a fun game of laser tag across 2 different game modes!
Game Mode 1: Last One Standing
In Last One Standing, all players have 10 lives. The last player with remaining lives is the winner.
Game Mode 2: Time Deathmatch
In Time Deathmatch, a game duration is set, and the player with the most eliminations at the end of the game is the winner.
## How we built it
We 3D printed a housing for the laser blaster which contains an ESP32, laser transmitter, and laser sensor. We designed the laser blasters to communicate with each other during gameplay over ESP-NOW peer-to-peer communication to relay point-scoring data. When the laser sensor detects a hit, it can send an acknowledgement to the blaster that made the hit in order to correctly assign points to players. We worked with an OLED display over I2C to show the user their remaining lives or points scored and game time remaining, depending on the game mode.
## Challenges we ran into
To implement game mode two, we initially tried to use the HDK Android development board to act as a server that could communicate with the individual laser guns to tally point totals, however we were unable to get the board to work with WiFi over the eduroam network, or enable Direct WiFi to the laser blasters.
## Accomplishments that we're proud of
We're proud that when we were unable to work with the HDK Android development board, we were able to pivot to a different technology that could enable our blasters to communicate. We'd never used the ESP-NOW communication protocol, and we successfully used it to connect a theoretical limit of 20 blasters over a field range of 200 yards!
## What we learned
* ESP-NOW using MAC addresses
+ Asynchronously receiving packets
* 3D Printing
* Working with OLED displays
* Using lasers to trigger sensors over long distances
## What's next for Quick Connect Laser Tag
* Building more blasters to support more players
* Custom PCB
* Improving blaster housing | winning |
## Inspiration
One of our team members' grandfathers went blind after slipping and hitting his spinal cord, going from a completely independent individual to being reliant on others for everything. The lack of options was upsetting: how could a man who was so independent be so severely limited by a small accident? There is current technology out there for blind individuals to navigate their home; however, there is no such technology that allows blind AND frail individuals to do so. With an increasing aging population, Elderlyf is here to be that technology. We hope to help our team member's grandfather and others like him regain their independence by making a tool that is affordable, efficient, and liberating.
## What it does
Ask your Alexa to take you to a room in the house, and Elderlyf will automatically detect which room you're currently in, mapping out a path from your current room to your target room. With vibration disks strategically located underneath the hand rests, Elderlyf gives you haptic feedback to let you know when objects are in your way and in which direction you should turn. With an intelligent turning system, Elderlyf gently helps with turning corners and avoiding obstacles.
## How I built it
With a Jetson Nano and RealSense cameras, front-view obstacles are detected and a map of the possible routes is generated. SLAM localization was also achieved using those technologies. An Alexa and the AWS Speech-to-Text API were used to activate the mapping and navigation algorithms. Two servo motors can independently apply a gentle brake to the wheels to aid users when turning and avoiding obstacles. Piezoelectric vibrating disks were also used to provide haptic feedback about which direction to turn and when obstacles are close.
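An illustrative sketch of the feedback step, mapping the nearest obstacle in a depth frame to a vibration strength and a left/right cue for the piezo disks, is shown below. This is not the actual Elderlyf code; the distance thresholds and frame layout are assumptions.

```python
import numpy as np

def haptic_cue(depth_frame_m, min_d=0.4, max_d=2.0):
    """depth_frame_m: 2-D numpy array of distances in metres (0 = no reading)."""
    valid = np.where(depth_frame_m > 0, depth_frame_m, np.inf)
    row, col = np.unravel_index(np.argmin(valid), valid.shape)
    nearest = valid[row, col]
    if nearest > max_d:
        return {"intensity": 0, "side": None}          # nothing close enough to warn about
    # Closer obstacles produce stronger vibration (0-255 duty-cycle style value).
    intensity = int(255 * (max_d - max(nearest, min_d)) / (max_d - min_d))
    side = "left" if col < depth_frame_m.shape[1] // 2 else "right"
    return {"intensity": intensity, "side": side}

frame = np.full((4, 6), 3.0)
frame[2, 1] = 0.8                                      # obstacle 0.8 m away on the left
print(haptic_cue(frame))                               # -> {'intensity': 191, 'side': 'left'}
```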
## Challenges I ran into
Mounting the turning assistance system was a HUGE challenge as the setup needed to be extremely stable. We ended up laser-cutting mounting pieces to fix this problem.
## Accomplishments that we're proud of
We're proud of creating a project that is both software and hardware intensive and yet somehow managing to get it finished up and working.
## What I learned
Learned that the RealSense camera really doesn't like working on the Jetson Nano.
## What's next for Elderlyf
Hoping to incorporate a microphone into the walker so that you can ask Alexa to take you to various rooms even when the Alexa may be out of range.
We were inspired by the impact plants have on battling climate change. We wanted something that not only identifies and gives information about our plants, but also provides an indication about what others think about the plant.
## What it does
You can provide an image of a plant, either by uploading a local image or URL to an image. It takes the plant image and matches it with a species, giving you a similarity score, a scientific name, as well as common names. You can click on it to open a modal that displays more information, including the sentiment analysis of its corresponding Wikipedia page as well as a more detailed description of the plant.
## How we built it
We used an API from [Pl@ntNet](https://identify.plantnet.org/) that utilizes image recognition to identify plants. To upload the image, we needed to provide a link to the image path as a parameter. In order to make this compatible with locally uploaded images, we first saved these into Firebase. Then, we passed the identified species through an npm web scraping library called WikiJS to pull the text content from the Wikipedia page. Finally, we used Google Cloud's Natural Language API to perform sentiment analysis on Wikipedia.
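For the sentiment step specifically, a minimal sketch of the Cloud Natural Language call on the scraped Wikipedia text might look like the following (shown in Python for illustration; the project's actual glue code is JavaScript):

```python
from google.cloud import language_v1

def plant_sentiment(wiki_text: str):
    # Assumes GOOGLE_APPLICATION_CREDENTIALS is configured for the client.
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=wiki_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    # score is in [-1, 1] (negative to positive); magnitude grows with emotional content.
    return {"score": sentiment.score, "magnitude": sentiment.magnitude}

print(plant_sentiment("The oak is a beloved, long-lived tree prized for its shade."))
```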
## Challenges we ran into
* Finding sources that we can perform sentiment analysis on our plant
* Being able to upload a local image to be identified, which we resolved using Firebase
* Finding the appropriate API/database for plants
* Connecting frontend with backend
## Accomplishments that we're proud of
* Trying out Firebase and Google Cloud for the first time
* Learning cutting-edge image recognition and NLP software
* Integrating APIs to gather data
* Our beautiful UI
## What we learned
* How to manage and use Google Cloud's Natural Language API and the Pl@ntNet API
* There are a lot of libraries and API's that already exist to make our life easier
## What's next for Plantr
* Find ways to get carbon sequestration data about the plant
* Apply sentiment analysis on blog posts about plants to obtain better data | ## Inspiration
This generation of technological innovation and human factors design focuses heavily on designing for individuals with disabilities. As such, the inspiration for our project was an application of object detection (Darknet YOLOv3) for visually impaired individuals. This user group in particular has limited access to the visual modality, which our project aims to compensate for.
## What it does
Our project aims to provide the visually impaired with sufficient awareness of their surroundings to maneuver. We created a head-mounted prototype that provides the user group real-time awareness of their surroundings through haptic feedback. Our smart wearable technology uses a computer-vision ML algorithm (a convolutional neural network) to scan the user's environment and provide haptic feedback as a warning of a nearby obstacle. These obstacles are further categorized by our algorithm as dynamic (moving, live objects) or static (stationary) objects. For our prototype, we filtered through all the objects detected to focus on the nearest one and provide immediate feedback to the user, with stronger or weaker haptic feedback depending on whether that object is near or far.
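A sketch of that "nearest object wins" filtering and intensity mapping is shown below. The detection format, the serial device path, and the one-byte protocol to the Arduino are assumptions for illustration rather than our exact implementation.

```python
import serial  # pyserial

def nearest_detection(detections):
    """detections: list of dicts like {"label": "person", "distance_m": 1.8, "moving": True}."""
    return min(detections, key=lambda d: d["distance_m"], default=None)

def intensity_for(distance_m, max_range_m=4.0):
    clipped = min(max(distance_m, 0.0), max_range_m)
    return int(255 * (1 - clipped / max_range_m))        # closer -> stronger vibration

def send_haptic(port, detections):
    d = nearest_detection(detections)
    if d is None:
        port.write(bytes([0]))                           # nothing nearby: motors off
        return
    port.write(bytes([intensity_for(d["distance_m"])]))

if __name__ == "__main__":
    arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # hypothetical device path
    send_haptic(arduino, [{"label": "chair", "distance_m": 0.9, "moving": False},
                          {"label": "person", "distance_m": 2.5, "moving": True}])
```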
## Process
While our idea is relatively simple in nature, we had no idea going in just how difficult the implementation was.
Our main goal was to meet a minimum deliverable product that was capable of vibrating based on the position, type, and distance of an object. From there, we had extra goals like distance calibration, optimization/performance improvements, and a more complex human interface.
Originally, the processing chip (the one with the neural network preloaded) was intended to be the Huawei Atlas. With the additional design optimizations unique to neural networks, it was perfect for our application. After 5 or so hours of tinkering with no progress however, we realized this would be far too difficult for our project.
We turned to a Raspberry Pi and uploaded Google’s pre-trained image analysis network. To get the necessary IO for the haptic feedback, we also had this hooked up to an Arduino which was connected to a series of haptic motors. This showed much more promise than the Huawei board and we had a functional object identifier in no time. The rest of the night was spent figuring out data transmission to the Arduino board and the corresponding decoding/output.
With only 3 hours to go, we still had to finish debugging and assemble the entire hardware rig.
## Key Takeaways
In the future, we all learned just how important having a minimum deliverable product (MDP) was. Our solution could be executed with varying levels of complexity and we wasted a significant amount of time on unachievable pipe dreams instead of focusing on the important base implementation.
The other key takeaway of this event was to be careful with new technology. Since the Huawei boards were so new and relatively complicated to set up, they were incredibly difficult to use. We did not even use the Huawei Atlas in our final implementation, meaning that all of that work did not contribute to our MDP.
## Possible Improvements
If we had more time, there are a few things we would seek to improve.
First, the biggest improvement would be to get a better processor. Either a Raspberry Pi 4 or a suitable replacement would significantly improve the processing framerate. This would make it possible to provide more robust real-time tracking instead of tracking with significant delays.
Second, we would expand the recognition capabilities of our system. Our current system only filters for a very specific set of objects, particular to an office/workplace environment. Our ideal implementation would be a system applicable to all aspects of daily life. This means more objects that are recognized with higher confidence.
Third, we would add a robust distance measurement tool. The current project uses object width to estimate the distance to an object. This is not always accurate unfortunately and could be improved with minimal effort. | partial |
## Inspiration
We wanted to use Livepeer's features to build a unique streaming experience for gaming content for both streamers and viewers.
Inspired by Twitch, we wanted to create a platform that increases exposure for small and upcoming creators and establishes a more unified social ecosystem for viewers, allowing them to connect and interact on a deeper level.
## What is does
kizuna has aspirations to implement the following features:
* Livestream and upload videos
* View videos (both on a big screen and in a small mini-player for multitasking)
* Interact with friends (on stream, in a private chat, or in public chat)
* View activities of friends
* Highlights smaller, local, and upcoming streamers
## How we built it
Our web application was built using React, utilizing React Router to navigate between webpages and Livepeer's API to allow users to upload content and host livestreams on our platform. For background context, Livepeer describes itself as a decentralized video infrastructure network.
The UI design was made entirely in Figma and was inspired by Twitch. However, as a result of a user research survey, changes to the chat and sidebar were made in order to facilitate a healthier user experience. New design features include a "Friends" page, introducing a social aspect that allows for users of the platform, both streamers and viewers, to interact with each other and build a more meaningful connection.
## Challenges we ran into
We had barriers with the API key provided by Livepeer.studio. This put a halt to the development side of our project. However, we still managed to get our livestreams working and our videos uploading! Implementing the design portion from Figma to the application acted as a barrier as well. We hope to tweak the application in the future to be as accurate to the UX/UI as possible. Otherwise, working with Livepeer's API was a blast, and we cannot wait to continue to develop this project!
You can discover more about Livepeer's API [here](https://livepeer.org/).
## Accomplishments that we're proud of
Our group is proud of our persistence through all the challenges that confronted us throughout the hackathon. From learning a whole new programming language, to staying awake no matter how tired, we are all proud of each other's dedication to creating a great project.
## What we learned
Although we knew of each other before the hackathon, we all agreed that having teammates that you can collaborate with is a fundamental part to developing a project.
The developers (Josh and Kennedy) learned lots about implementing APIs and working with designers for the first time. For Josh, this was his first time applying his practice from small projects in React. This was Kennedy's first hackathon, where she learned how to implement CSS.
The UX/UI designers (Dorothy and Brian) learned more about designing web applications as opposed to the mobile applications they are used to. Through this challenge, they were also able to learn more about Figma's design tools and functions.
## What's next for kizuna
Our team intends to continue developing this application to its full potential. Although all of us are still learning, we would like to accomplish the next steps in our application:
* Completing the full UX/UI design on the development side, utilizing a CSS framework like Tailwind
* Implementing Lens Protocol to create a unified social community in our application
* Redesign some small aspects of each page
* Implementing filters to categorize streamers, see who is streaming, and categorize genres of stream. | ## Inspiration
Covid-19 has turned every aspect of the world upside down. Unwanted things happen, situation been changed. Lack of communication and economic crisis cannot be prevented. Thus, we develop an application that can help people to survive during this pandemic situation by providing them **a shift-taker job platform which creates a win-win solution for both parties.**
## What it does
This application offers the ability to connect companies/manager that need employees to cover a shift for their absence employee in certain period of time without any contract. As a result, they will be able to cover their needs to survive in this pandemic. Despite its main goal, this app can generally be use to help people to **gain income anytime, anywhere, and with anyone.** They can adjust their time, their needs, and their ability to get a job with job-dash.
## How we built it
For the design, we used Figma to lay out all the screens and create smooth transitions between frames. While the design was being worked on, the developers started to code the functionality to make the application work.
The front end was made using React; we used React Bootstrap and some custom styling to build the pages according to the UI. State management was done using the Context API to keep it simple. We used Node.js on the backend for easy context switching between frontend and backend, with Express and an SQLite database for development. Authentication was done using JWT, allowing us to avoid storing session cookies.
## Challenges we ran into
In terms of UI/UX, handling the ethics of user information was a challenge for us, as was providing complete details for both parties. On the developer side, using Bootstrap components ended up slowing us down because our design was custom, requiring us to override most of the styles. It would have been better to use Tailwind, as it would have given us more flexibility while also cutting down time versus writing CSS from scratch. Due to the online nature of the hackathon, some tasks also took longer.
## Accomplishments that we're proud of
Some of us picked up new technologies while working on the project, and creating a smooth UI/UX in Figma, including every feature we planned, was satisfying.
Here's the link to the Figma prototype - User point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=68%3A3872&scaling=min-zoom)
Here's the link to the Figma prototype - Company/Business point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=107%3A10&scaling=min-zoom)
## What we learned
We learned that we should narrow down the scope more for future hackathons, so it would be easier to focus on one unique feature of the app.
## What's next for Job-Dash
In terms of UI/UX, we would love to make some more improvements to the layout so it better serves its purpose of helping people find additional income through Job-Dash effectively. On the developer side, we would like to continue developing the features. We spent a long time thinking about different features that would be helpful to people, but due to the short nature of the hackathon, implementation was only a small part, as we underestimated the time it would take. On the bright side, we have the design ready and exciting features to work on.
Given that students are struggling to make friends online, we came up with an idea to make this easier. Our web application combines the experience of a video conferencing app with a social media platform.
## What it does
Our web app targets primarily students attending college lectures. We wanted to have an application that would allow users to enter their interests/hobbies/classes they are taking. Based on this information, other students would be able to search for people with similar interests and potentially reach out to them.
## How we built it
One of the main tools we used was WebRTC, which facilitates video and audio transmission between computers. We also used Google Cloud and Firebase for hosting the application and implementing user authentication. We used HTML/CSS/JavaScript for building the front end.
## Challenges we ran into
Both of us were super new to Google Cloud + Firebase and backend development in general. The setup of both platforms took a significant amount of time. Also, we had some trouble with version control on GitHub.
## Accomplishments we are proud of
GETTING STARTED WITH BACKEND is a huge accomplishment!
## What we learned
Google Cloud, Firebase, WebRTC - we got introduced to all of these tools during the hackathon.
## What’s next for Studypad
We will definitely continue working on this project and implement other features we were thinking about! | winning |
## Inspiration
The MIT mailing list free-food is frequently contacted by members of the community offering free food. Students rush to the location of free food, only to find it's been claimed. From the firsthand accounts of other students as well as personal experience, we know that it's incredibly hard to respond fast enough to free food when notified by the mailing list.
## What it does
F3 collects information about free food from the free-food mailing list. The location of the food is parsed from the emails, and then using the phone's GPS and the whereis.mit.edu API, the distance to the food is calculated. Using a combination of the distance and the age of the email, the food listings are sorted.
## How we built it
This app was built with Android Studio using Java and a few different libraries and APIs.
Emails from [[email protected]](mailto:[email protected]) are automatically forwarded to a Gmail account that the app has access to. Using an Android mail library, we parsed the location (in the form of various building names and nicknames) to determine which building the food is located at. Then, the user's location is taken to calculate the distance between the free food and the phone's GPS location. The user receives a list of free food, including the building/location, the distance from their own coordinates, and the 'age' of the free food (how long ago the email was sent).
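The distance step is the usual great-circle calculation between two GPS fixes. A language-agnostic sketch is shown below in Python (the app itself does this in Java, and the example coordinates are approximate):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0                                   # Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: distance between two approximate campus coordinates.
print(round(haversine_m(42.3591, -71.0949, 42.3616, -71.0902), 1), "metres")
```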
## Challenges we ran into
At first, we wrote the mail reading/parsing code in vanilla Java outside of Android Studio. However, when we tried to integrate it with the app, we realized that Java libraries aren't necessarily compatible with Android. Hence, a considerable amount of time late at night was put toward reworking the mail code to be compatible with Android.
Also, there were difficulties in retrieving GPS coordinates, especially with regard to requesting fine location permissions, along with various stability issues.
## Accomplishments that we're proud of
* Creating our first app (none of us had prior Android development experience)
* Making horrible puns
## What we learned
* How to sort-of use Android Studio
* How email/IMAP works
* How to use Git/GitHub
* How to use regular expressions
## What's next for f3
* Settings for a search radius
* Refresh periodically in background
* Push(een) notifications
* More Pusheen | ## Inspiration
AI is a super powerful tool for those who know how to prompt it and utilize it for guidance and education rather than just for a final answer. As AI becomes increasingly more accessible to everyone, it is clear that teaching the younger generation to use it properly is incredibly important, so that it does not have a negative impact on their learning and development. This thought process inspired us to create an app that allows a younger child to receive AI assistance in a way that is both fun and engaging, while preventing them from skipping steps in their learning process.
## What it does
mentora is an interactive Voice AI Tutor geared towards elementary and middle school aged students which takes on the form of their favorite fictional characters from movies and TV shows. Users are provided with the ability to write their work onto a whiteboard within the web application while chatting with an emotionally intelligent AI who sounds exactly like the character of their choice. The tutor is specifically engineered to guide the user towards a solution to their problem without revealing or explaining too many steps at a time. mentora gives children a platform to learn how to use AI the right way, highlighting it as a powerful and useful tool for learning rather than a means for taking short cuts.
## How we built it
We built mentora as a full-stack web application utilizing React for the frontend and Node.js for the backend, with the majority of our code written in JavaScript and TypeScript. Our project required integrating several APIs into a seamless workflow to create an intuitive, voice-driven educational tool. We started by implementing Deepgram, which allowed us to capture and transcribe students' voice inputs in real time. Beyond transcription, Deepgram's sentiment analysis feature helped detect emotions like frustration or confusion in the child's tone, enabling our AI to adjust its responses accordingly and provide empathetic assistance.
Next, we integrated Cartesia to clone character voices, making interactions more engaging by allowing children to talk to their favorite characters. This feature gave our AI a personalized feel, as it responded using the selected character’s voice, making the learning experience more enjoyable and relatable for younger users.
For visual interaction, we used Tldraw to develop a dynamic whiteboard interface. This allowed children to upload images or draw directly on the screen, which the AI could interpret to provide relevant feedback. The whiteboard input was synchronized with the audio input, creating a multi-modal learning environment where both voice and visuals were processed together.
Finally, we used the OpenAI API to tie everything together. The API parsed contextual information from previous conversations and the whiteboard to generate thoughtful, step-by-step guidance. This integration ensured the AI could provide appropriate hints without giving away full solutions, fostering meaningful learning while maintaining real-time responsiveness.
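As an illustration, a "guide, don't solve" request of the kind described could look like the Python sketch below (the app's backend is Node.js, and the model name, message layout, and sample inputs are assumptions):

```python
from openai import OpenAI

client = OpenAI()

def tutor_reply(character, transcript, whiteboard_text, sentiment):
    # System prompt encodes the tutoring policy: one hint at a time, never the full answer.
    system = (
        f"You are {character}, a patient tutor for a middle-school student. "
        "Give one hint or guiding question at a time. Never reveal the full solution. "
        f"The student currently sounds {sentiment}; adjust your tone accordingly."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user",
             "content": f"Whiteboard so far: {whiteboard_text}\nStudent said: {transcript}"},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("Aang", "I don't get how to isolate x in 3x + 4 = 19.",
                  "3x + 4 = 19", "frustrated"))
```

The returned text would then be passed to the character voice clone so the student hears the hint in their chosen character's voice.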
## Challenges we ran into
A summary of our biggest challenges:
Combining data from our whiteboard feature with our microphone feature to make a single openAI API call.
Learning how to use and integrate Deepgram and Cartesia APIs to emotionally analyze and describe our audio inputs, and voice clone for AI responses
Finding a high quality photo of Aang from Avatar the Last Airbender
## Accomplishments that we're proud of
We are really proud of the fact that we successfully brought to life the project we set out to build and brainstormed for, while expanding on our ideas in ways that we wouldn’t have even imagined before this weekend. We are also proud of the fact that we created an application that could benefit the next generation by shedding a positive light on the use of AI for students who are just becoming familiar with it.
## What we learned
Building mentora taught us how to integrate multiple APIs into a seamless workflow. We gained hands-on experience with Deepgram, using it to transcribe voice inputs and perform sentiment analysis. We also integrated Cartesia for voice cloning, allowing the AI to respond in the voice of the character selected by the user. Using Tldraw, we created a functional whiteboard interface where students could upload images or write directly, providing visual input alongside audio input for a smoother learning experience. Finally, we used an OpenAI API call to integrate the entire functionality.
The most valuable part of the process was learning how to design a workflow where multiple technologies interacted harmoniously—from capturing voice input and analyzing emotions to generating thoughtful responses through avatars. We also learned how important it was to plan the integration ahead of time. We had many ideas, and we had to try out all of them to see what would work and what would not. While this was initially challenging due to all the moving pieces, creating a structure for what we wanted the final project to look like allowed us to keep the final goal in mind. On the other hand, it was important that we were willing to change focus when better ideas were created and when old ideas had flaws.
Ultimately, this project gave us deeper insights into full-stack development and reinforced the balance of structure vs. adaptability when creating a new product.
## What's next for mentora
There are many next steps we could take and directions we could go with mentora. Ideas we have discussed are deploying the website, creating a custom character creation menu that allows the users to input new characters and voices, improve latency up to real-time speed for back and forth conversation, and broaden the range of subjects that the tutor is well prepared to assist with. | ## Inspiration
Ordering delivery and eating out is a major aspect of our social lives. But when healthy eating and dieting come into play, they interfere with our ability to eat out and hang out with friends. With a wave of fitness hitting our generation like a storm, we have to preserve our social relationships while allowing health-conscious people to feel at peace with their dieting plans. With NutroPNG, we enable these differences to be settled once and for all by allowing health freaks to keep up with their diet plans while still making restaurant eating possible.
## What it does
The user has the option to take a picture or upload their own picture using the front end of our web application. With this input, the backend detects the foods in the photo and labels them through AI image processing using the Google Vision API. Finally, with the CalorieNinja API, these labels are matched against a remote database to generate the nutritional contents of the food, and we display these contents to our users in an interactive manner.
## How we built it
Frontend: Vue.js, tailwindCSS
Backend: Python Flask, Google Vision API, CalorieNinja API
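A rough sketch of that backend flow (not the exact project code) is shown below: label the uploaded photo with the Vision API, then look the food labels up via CalorieNinjas. The route name and placeholder key are assumptions, and the CalorieNinjas endpoint/header follow their public docs at the time of writing.

```python
import requests
from flask import Flask, jsonify, request
from google.cloud import vision

app = Flask(__name__)
vision_client = vision.ImageAnnotatorClient()
CALORIE_NINJAS_KEY = "YOUR_API_KEY"  # placeholder

@app.route("/analyze", methods=["POST"])
def analyze():
    # Image bytes arrive in the request body; take the top 5 labels as food guesses.
    image = vision.Image(content=request.data)
    labels = [l.description
              for l in vision_client.label_detection(image=image).label_annotations[:5]]
    nutrition = requests.get(
        "https://api.calorieninjas.com/v1/nutrition",
        params={"query": ", ".join(labels)},
        headers={"X-Api-Key": CALORIE_NINJAS_KEY},
    ).json()
    return jsonify({"labels": labels, "nutrition": nutrition})

if __name__ == "__main__":
    app.run(port=5000)
```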
## Challenges we ran into
As many of us are first-year students, learning while developing a product within 24 hours was a big challenge.
## Accomplishments that we're proud of
We are proud to have implemented AI in a capacity that assists people in their daily lives, and we hope this idea improves people's relationships and social lives while still letting them maintain their goals.
## What we learned
As most of our team are first-year students with minimal experience, we've leveraged our strengths to collaborate. As well, we learned to use the Google Vision API with cameras, and we are now able to do even more.
## What's next for NutroPNG
* Calculate sum of calories, etc.
* Use image processing to estimate serving sizes
* Implement technology into prevalent nutrition trackers, i.e Lifesum, MyPlate, etc.
* Collaborate with local restaurant businesses | losing |
# DashLab
DashLab is a way for individuals and businesses to share and collaborate within a data-driven environment through a real-time data visualization dashboard web application.
# The Overview
Within any corporate environment you'll find individuals working on some sort of analysis or analytics projects in general. Under one company, and sometimes even within teams, you'll find a loose approach to drilling down to the necessary insight that in the end drives both the low-level, daily decisions and the higher level, higher pressure ones. What often happens is at the same time these teams or employees will struggle to translate these findings to the necessary personnel.
There's so much going on when it comes to deriving necessary information that it often becomes less powerful as a business function. DashLab provides a simple, eloquent, and intuitive solution that brings together the need to save, share, and show this data. Whether it's a debriefing between internal teams across different departments, or be it a key process that involves internal and external influence, DashLab provides users with a collaborative and necessary environment for real-time drill-down data investigation.
# Other
To utilize the full functionality of this website, test the real-time drill-down events by viewing the site from two different clients: open a web browser and visit the site from two different tabs, sort through the available fields within the line graph visualization, and click on cities and countries to see both tabs update to the current selection. **Full mobile compatibility is not supported.**
We wanted to be able to allow people to understand the news they read in context because often times, we ourselves will read about events happening on the other side of the globe, but we have no idea where it is. So we wanted a way to visualize the news along with it's place in the world.
## What it does
Visualize the news as it happens in real-time, all around the world. Each day, GLOBEal aggregates news and geotags it, allowing the news to be experienced in a more lucid and immersive manner. Double click on a location and see what's happening there right now. Look into the past and see how the world shifts as history is made.
## How we built it
We used the WebGL Open Globe Platform to display magnitudes of popularity that were determined by a "pagerank" we built by crawling Google ourselves and using Webhose APIs. We used Python scripts to create these crawlers and API calls, and then populated JSON files. We also used JavaScript with the Google Maps, HERE Maps, and Google News APIs in order to allow a user to double-click on the globe and see the news from that location.
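As an illustration of the JSON population step, geotagged article counts can be aggregated into the flat [lat, lon, magnitude, ...] series format the WebGL Globe consumes; the crawler and Webhose calls themselves are omitted, and the sample data is made up.

```python
import json
from collections import Counter

def build_globe_series(articles, name="news"):
    """articles: list of dicts like {"lat": 45.5, "lon": -73.6, ...} from the crawler."""
    counts = Counter((round(a["lat"], 1), round(a["lon"], 1)) for a in articles)
    peak = max(counts.values())
    flat = []
    for (lat, lon), n in counts.items():
        flat += [lat, lon, n / peak]              # magnitude normalised to the busiest spot
    return [[name, flat]]

articles = [{"lat": 45.50, "lon": -73.57},
            {"lat": 45.52, "lon": -73.58},
            {"lat": 51.51, "lon": -0.13}]
with open("globe_data.json", "w") as f:
    json.dump(build_globe_series(articles), f)
```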
## Challenges we ran into
Google blocked our IPs because our web crawler made too many queries/second
Our query needs were too many for the free version of WebHose, so we called them and got a deal where they gave us free queries in exchange for attribution. So shout out to [webhose.io](http://webhose.io)!!!
## Accomplishments that we're proud of
Learned how to make web crawlers, how to use javascript/html/css, and developed a partnership with Webhose
Made a cool app!
## What we learned
JavaScript, Firebase, Webhose, and how to survive without sleep
## What's next for GLOBEal News
* INTERGALACTIC NEWS!
* Work more on timelapse
* Faster update times
* Tags on globe directly
* Click through mouse rather than camera ray
## Inspiration
We as a team shared the same interest in knowing more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for how to implement a solution related to smart cities. By looking at the different information available from the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area.
## What it does
We have defined a signal that, if performed in front of the camera, a machine learning algorithm is able to detect; it then notifies authorities that they should check out this location, for the possibility of catching a potentially suspicious suspect or even just being present to keep civilians safe.
## How we built it
First, we collected data off the Innovation Factory API and inspected the code carefully to get to know what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning model. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project.
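The write-up doesn't spell out the exact distress gesture, so the following is a hypothetical sketch of checking a simple "both hands above the head" pose from the keypoints a pre-trained pose model might return per person; the keypoint format and threshold are assumptions.

```python
def is_distress_pose(keypoints, min_confidence=0.5):
    """keypoints: {"nose": (x, y, conf), "left_wrist": (x, y, conf), "right_wrist": (x, y, conf)}."""
    needed = ("nose", "left_wrist", "right_wrist")
    if any(k not in keypoints or keypoints[k][2] < min_confidence for k in needed):
        return False
    nose_y = keypoints["nose"][1]
    # Image y grows downward, so "above the head" means a smaller y value.
    return all(keypoints[w][1] < nose_y for w in ("left_wrist", "right_wrist"))

frame_keypoints = {"nose": (310, 140, 0.9),
                   "left_wrist": (270, 90, 0.8),
                   "right_wrist": (350, 95, 0.7)}
if is_distress_pose(frame_keypoints):
    print("Distress signal detected: flag this camera's location for review.")
```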
## Challenges we ran into
Using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms unfortunately being an older version that would not compile with our code, and finally the frame rate on the playback of the footage when running the algorithm through it.
## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project
Donya: Getting to know the basics of how machine learning works
Alok: How to deal with unexpected challenges and look at it as a positive change
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.
## What we learned
Machine learning basics, Postman, working on different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no information or incomplete information.
## What's next for Smart City SOS
hopefully working with innovation factory to grow our project as well as inspiring individuals with similar passion or desire to create a change. | ## Inspiration
We wanted to take advantage of AR and object detection technologies to help people gain safer walking experiences and to communicate distance information that helps people with vision loss navigate.
## What it does
It augments the world with beeping sounds that change depending on your proximity to obstacles, and it identifies surrounding objects and converts their names to speech to alert the user.
## How we built it
ARKit and RealityKit, using the LiDAR sensor to detect distance; AVFoundation for text-to-speech; Core ML with a YOLOv3 real-time object detection machine learning model; and SwiftUI.
## Challenges we ran into
Computational efficiency. Going through all the pixels from the LiDAR sensor in real time wasn't feasible, so we had to optimize by cropping the sensor data to the center of the screen.
## Accomplishments that we're proud of
It works as intended.
## What we learned
We learned how to combine AR, AI, LiDAR, ARKit, and SwiftUI to make an iOS app in 15 hours.
## What's next for SeerAR
* Expand to Apple Watch and Android devices
* Improve the accuracy of object detection and recognition
* Connect with Firebase and Google Cloud APIs
This weekend at Deltahacks we were challenged to hack for change and the betterment of our communities. The Innovation Factory Hamilton challenged the competitors to look to the future and help build a more connected city that is better enabled to serve it's inhabitants. We used this as inspiration for building our application. It has the potential to save lives by cutting down on response time for dangerous situations such as a car crash or an active threat.
This was our chance to "imagineer" a part of the future
## What it does
Smart Response is made to minimize the overhead time notifying first responders when there has been an incident that is potentially life-threatening. Using the cityIQ API we connected to the Hamilton cityIQ nodes to receive real-time audio and video from nodes around the city. Using the audio data, we trained and implemented a model using Python to detect when there has been a significant event and then notify first responders or the public of a threat. The application would then geotag the location of the incident with a real-time image and classify it (car crash, gunshot etc.) based on the audio properties. First responders can view the image to determine the severity of the incident and determine what actions to take next.
## How we built it
We built this system using many technologies.
Front-end app:
React Native and Apple Maps
Back-end:
Node.js + express, Python, Math and machine learning libraries such as librosa, openCV, matplolib, scikit, tensorFlow; and finally PostgreSQL for the database
API:
We used the cityIQ api to get live audio and images of Hamilton.
## Challenges we ran into
We ran into many challenges over the course of the project. Connecting to the cityIQ API was a challenge we faced at the beginning, but we persevered and were able to receive data from the node. Later, building and training a model for classifying mel spectrogram images was a challenge, as it was our first time using machine learning and the Python libraries associated with it. Building digital filters to remove most of the ambient noise present in the audio data was also a challenge, as it took trial and error to find a filter that removed noise while still allowing different sounds to be classified.
## Accomplishments that we're proud of
Every group member contributed and learned a lot in such a short period of time. It was challenging to stay focused when facing many issues along the way, but we are all proud of staying the course and building the platform. We are proud of the fact that what we built has the potential to impact people on a day-to-day basis.
## What we learned
Our group stepped out of our comfort zone and all learned something new. It was our first time developing an iOS app in React Native, as well as using an iOS maps API. We also learned the ML and signal processing libraries in Python.
## What's next for Smart Response
Expand the number of connected nodes serving the community. Also use the nodes to obtain more training data for the model, and train it with other common urban noise data to accurately analyze a larger range of disturbances.
## Inspiration
When we heard about using food as a means of love and connection from Otsuka x VALUENEX’s Opening Ceremony presentation, our team was instantly inspired to create something that would connect Asian American Gen Z with our cultural roots and immigrant parents. Recently, there has been a surge of instant Asian food in American grocery stores. However, the love that exudes out of our mother’s piping hot dishes is irreplaceable, which is why it’s important for us, the loneliest demographic in the U.S., to cherish our immigrant parents’ traditional recipes. As Asian American Gen Z ourselves, we often fear losing out on beloved cultural dishes, as our parents have recipes ingrained in them out of years of repetition and thus, neglected documenting these precious recipes. As a result, many of us don’t have access to recreating these traditional dishes, so we wanted to create a web application that encourages sharing of traditional, cultural recipes from our immigrant parents to Asian American Gen Z. We hope that this will reinforce cross-generational relationships, alleviate feelings of disconnect and loneliness (especially in immigrant families), and preserve memories and traditions.
## What it does
Through this web application, users have the option to browse through previews of traditional Asian recipes, posted by Asian or Asian American parents, featured on the landing page. If choosing to browse through, users can filter (by culture) through recipes to get closer to finding their perfect dish that reminds them of home. In the previews of the dishes, users will find the difficulty of the dish (via the number of knives – greater is more difficult), the cultural type of dish, and will also have the option to favorite/save a dish. Once they click on the preview of a dish, they will be greeted by an expanded version of the recipe, featuring the name and image of the dish, ingredients, and instructions on how to prepare and cook this dish. For users that want to add recipes to *yumma*, they can utilize a modal box and input various details about the dish. Additionally, users can also supplement their recipes with stories about the meaning behind each dish, sparking warm memories that will last forever.
## How we built it
We built *yumma* using ReactJS as our frontend, Convex as our backend (made easy!), Material UI for the modal component, CSS for styling, GitHub to manage our version set, a lot of helpful tips and guidance from mentors and sponsors (♡), a lot of hydration from Pocari Sweat (♡), and a lot of love from puppies (♡).
## Challenges we ran into
Since we were all relatively beginners in programming, we initially struggled with simply being able to bring our ideas to life through successful, bug-free implementation. We turned to a lot of experienced React mentors and sponsors (shoutout to Convex) for assistance in debugging. We truly believe that learning from such experienced and friendly individuals was one of the biggest and most valuable takeaways from this hackathon. We additionally struggled with styling because we were incredibly ambitious with our design and wanted to create a high-fidelity functioning app, however HTML/CSS styling can take large amounts of time when you barely know what a flex box is. Additionally, we also struggled heavily with getting our app to function due to one of its main features being in a popup menu (Modal from material UI). We worked around this by creating an extra button in order for us to accomplish the functionality we needed.
## Accomplishments that we're proud of
This is all of our first hackathon! All of us also only recently started getting into app development, and each has around a year or less of experience–so this was kind of a big deal to each of us. We were excitedly anticipating the challenge of starting something new from the ground up. While we were not expecting to even be able to submit a working app, we ended up accomplishing some of our key functionality and creating high fidelity designs. Not only that, but each and every one of us got to explore interests we didn’t even know we had. We are not only proud of our hard work in actually making this app come to fruition, but that we were all so open to putting ourselves out of our comfort zone and realizing our passions for these new endeavors. We tried new tools, practiced new skills, and pushed our necks to the most physical strain they could handle. Another accomplishment that we were proud of is simply the fact that we never gave up. It could have been very easy to shut our laptops and run around the Main Quadrangle, but our personal ties and passion for this project kept us going.
## What we learned
On the technical side, Erin and Kaylee learned how to use Convex for the first time (woo!) and learned how to work with components they never knew could exist, while Megan tried her hand for the first time at React and CSS while coming up with some stellar wireframes. Galen was a double threat, going back to her roots as a designer while helping us develop our display component. Beyond those skills, our team was able to connect with some of the company sponsors and reinvigorate our passions on why we chose to go down the path of technology and development in the first place. We also learned more about ourselves–our interests, our strengths, and our ability to connect with each other through this unique struggle.
## What's next for yumma
Adding the option to upload private recipes that can only be visible to you and any other user you invite to view it (so that your Ba Ngoai–grandma’s—recipes stay a family secret!)
Adding more dropdown features to the input fields so that some will be easier and quicker to use
A messaging feature where you can talk to other users and connect with them, so that cooking meetups can happen and you can share this part of your identity with others
Allowing users to upload photos of what they make from recipes they make and post them, where the most recent of photos for each recipe will be displayed as part of a carousel on each recipe component.
An ingredients list that users can edit to keep track of things they want to grocery shop for while browsing | ## Inspiration
We love to cook for and eat with our friends and family! Sometimes though, we need to accommodate certain dietary restrictions to make sure everyone can enjoy the food. What changes need to be made? Will it be a huge change? Will it still taste good? So much research and thought goes into this, and we all felt as though an easy resource was needed to help ease the planning of our cooking and make sure everyone can safely enjoy our tasty recipes!
## What is Let Them Cook?
First, a user selects the specific dietary restrictions they want to accommodate. Next, a user will copy the list of ingredients that are normally used in their recipe into a text prompt. Then, with the push of a button, our app cooks up a modified, just as tasty, and perfectly accommodating recipe, along with expert commentary from our "chef"! Our "chef" also provides suggestions for other similar recipes which also accomidate the user's specific needs.
## How was Let Them Cook built?
We used React to build up our user interface. On the main page, we implemented a rich text editor from TinyMCE to serve as our text input, allong with various applicable plugins to make the user input experience as seamless as possible.
Our backend is Python based. We set up API responses using FastAPI. Once the front end posts the given recipe, our backend passes the recipe ingredients and dietary restrictions into a fine-tuned large language model - specifically GPT4.
Our LLM had to be fine-tuned using a combination of provided context, hyper-parameter adjustment, and prompt engineering. We modified its responses with a focus on both dietary restrictions knowledge and specific output formatting.
The prompt engineering concepts we employed to receive the most optimal outputs were n-shot prompting, chain-of-thought (CoT) prompting, and generated knowledge prompting.
## UI/UX
### User Personas

*We build some user personas to help us better understand what needs our application could fulfil*
### User Flow

*The user flow was made to help us determine the necessary functionality we wanted to implement to make this application useful*
### Lo-fi Prototypes

*These lo-fi mockups were made to determine what layout we would present to the user to use our primary functionality*
### Hi-fi Prototypes
 
*Here we finalized the styling choice of a blue and yellow gradient, and we started planning for incorporating our extra feature as well - the recipe recomendations*
## Engineering

Frontend: React, JS, HTML, CSS, TinyMCE, Vite
Backend: FastAPI, Python
LLM: GPT4
Database: Zilliz
Hosting: Vercel (frontend), Render (backend)
## Challenges we ran into
### Frontend Challenges
Our recipe modification service is particularly sensitive to the format of the user-provided ingredients and dietary restrictions. This put the responsibility of vetting user input onto the frontend. We had to design multiple mechanisms to sanitize inputs before sending them to our API for further pre-processing. However, we also wanted to make sure that recipes were still readable by the humans who inputted them. Using the TinyMCE editor solved this problem effortlessly as it allowed us to display text in the format it was pasted, while simultaneously allowing our application to access a "raw", unformatted version of the text.
To display our modified recipe, we had to brainstorm the best ways to highlight any substitutions we made. We tried multiple different approaches including some pre-built components online. In the end, we decided to build our own custom component to render substitutions from the formatting returned by the backend.
We also had to design a user flow that would provide feedback while users wait for a response from our backend. This materialized in the form of an interim loading screen with a moving GIF indicating that our application had not timed out. This loading screen is dynamic, and automatically re-routes users to the relevant pages upon hearing back from our API.
### Backend Challenges
The biggest challenge we ran into was selecting a LLM that could produce consistent results based off different input recipes and prompt engineering. We started out with Together.AI, but found that it was inconsistent in formatting and generating valid allergy substitutions. After trying out other open-source LLMs, we found that they also produced undesirable results. Eventually, we compromised with GPT-4, which could produce the results we wanted after some prompt engineering; however, it it is not a free service.
Another challenge was with the database. After implementing the schema and functionality, we realized that we partitioned our design off of the incorrect data field. To finish our project on time, we had to store more values into our database in order for similarity search to still be implemented.

## Takeaways
### Accomplishments that we're proud of
From coming in with no knowledge, we were able to build a full stack web applications making use of the latest offerings in the LLM space. We experimented with prompt engineering, vector databases, similarity search, UI/UX design, and more to create a polished product. Not only are we able to demonstrate our learnings through our own devices, but we are also able to share them out with the world by deploying our application.
**All that said**, our proudest accomplishment was creating a service which can provide significant help to many in a common everyday experience: cooking and enjoying food with friends and family.
### What we learned
For some of us on the team, this entire project was built in technologies that were unfamiliar. Some of us had little experience with React or FastAPI so that was something the more experienced members got to teach on the job.
One of the concepts we spent the most time learning was about prompt engineering.
We also learned about the importance of branching on our repository as we had to build 3 different components to our project all at the same time on the same application.
Lastly, we spent a good chunk of time learning how to implement and improve our similarity search.
### What's next for Let Them Cook
We're very satisfied with the MVP we built this weekend, but we know there is far more work to be done.
First, we would like to deploy our recipe similarity service currently working on a local environment to our production environment. We would also like to incorporate a ranking system that will allow our LLM to take in crowdsourced user feedback in generating recipe substitutions.
Additionally, we would like to enhance our recipe substitution service to make use of recipe steps rather than solely *ingredients*. We believe that the added context of how ingredients come together will result in even higher quality substitutions.
Finally, we hope to add an option for users to directly insert a recipe URL rather than copy-and-pasting the ingredients. We would write another service to scrape the site and extract the information a user would previously paste. | ## Inspiration
Ordering delivery and eating out is a major aspect of our social lives. But when healthy eating and dieting comes into play it interferes with our ability to eat out and hangout with friends. With a wave of fitness hitting our generation as a storm we have to preserve our social relationships while allowing these health conscious people to feel at peace with their dieting plans. With NutroPNG, we enable these differences to be settled once in for all by allowing health freaks to keep up with their diet plans while still making restaurant eating possible.
## What it does
The user has the option to take a picture or upload their own picture using our the front end of our web application. With this input the backend detects the foods in the photo and labels them through AI image processing using Google Vision API. Finally with CalorieNinja API, these labels are sent to a remote database where we match up the labels to generate the nutritional contents of the food and we display these contents to our users in an interactive manner.
## How we built it
Frontend: Vue.js, tailwindCSS
Backend: Python Flask, Google Vision API, CalorieNinja API
## Challenges we ran into
As we are many first-year students, learning while developing a product within 24h is a big challenge.
## Accomplishments that we're proud of
We are proud to implement AI in a capacity to assist people in their daily lives. And to hopefully allow this idea to improve peoples relationships and social lives while still maintaining their goals.
## What we learned
As most of our team are first-year students with minimal experience, we've leveraged our strengths to collaborate together. As well, we learned to use the Google Vision API with cameras, and we are now able to do even more.
## What's next for McHacks
* Calculate sum of calories, etc.
* Use image processing to estimate serving sizes
* Implement technology into prevalent nutrition trackers, i.e Lifesum, MyPlate, etc.
* Collaborate with local restaurant businesses | partial |
# FaceConnect
##### Never lose a connection again! Connect with anyone, any wallet, and send transactions through an image of one's face!
## Inspiration
Have you ever met someone and instantly connected with them, only to realize you forgot to exchange contact information? Or, even worse, you have someone's contact but they are outdated and you have no way of contacting them? I certainly have.
This past week, I was going through some old photos and stumbled upon one from a Grade 5 Summer Camp. It was my first summer camp experience, I was super nervous going in but I had an incredible time with a friend I met there. We did everything together and it was one of my favorite memories from childhood. But there was a catch – I never got their contact, and I'd completely forgotten their name since it's been so long. All I had was a physical photo of us laughing together, and it felt like I'd lost a precious connection forever.
This dilemma got me thinking. The problem of losing touch with people we've shared fantastic moments with is all too common, whether it's at a hackathon, a party, a networking event, or a summer camp. So, I set out to tackle this issue at Hack The Valley.
## What it does
That's why I created FaceConnect, a Discord bot that rekindles these connections using facial recognition. With FaceConnect, you can send connection requests to people as long as you have a picture of their face.
But that's not all. FaceConnect also allows you to view account information and send transactions if you have a friend's face. If you owe your friend money, you can simply use the "transaction" command to complete the payment.
Or even if you find someone's wallet or driver's license, you can send a reach out to them just with their ID photo!
Imagine a world where you never lose contact with your favorite people again.
Join me in a future where no connections are lost. Welcome to FaceConnect!
## Demos
Mobile Registration and Connection Flow (Registering and Detecting my own face!):
<https://github.com/WilliamUW/HackTheValley/assets/25058545/d6fc22ae-b257-4810-a209-12e368128268>
Desktop Connection Flow (Obama + Trump + Me as examples):
<https://github.com/WilliamUW/HackTheValley/assets/25058545/e27ff4e8-984b-42dd-b836-584bc6e13611>
## How I built it
FaceConnect is built on a diverse technology stack:
1. **Computer Vision:** I used OpenCV and the Dlib C++ Library for facial biometric encoding and recognition.
2. **Vector Embeddings:** ChromaDB and Llama Index were used to create vector embeddings of sponsor documentation.
3. **Document Retrieval:** I utilized Langchain to implement document retrieval from VectorDBs.
4. **Language Model:** OpenAI was employed to process user queries.
5. **Messaging:** Twilio API was integrated to enable SMS notifications for contacting connections.
6. **Discord Integration:** The bot was built using the discord.py library to integrate the user flow into Discord.
7. **Blockchain Technologies:** I integrated Hedera to build a decentralized landing page and user authentication. I also interacted with Flow to facilitate seamless transactions.
## Challenges I ran into
Building FaceConnect presented several challenges:
* **Solo Coding:** As some team members had midterm exams, the project was developed solo. This was both challenging and rewarding as it allowed for experimentation with different technologies.
* **New Technologies:** Working with technologies like ICP, Flow, and Hedera for the first time required a significant learning curve. However, this provided an opportunity to develop custom Language Models (LLMs) trained on sponsor documentation to facilitate the learning process.
* **Biometric Encoding:** It was my first time implementing facial biometric encoding and recognition! Although cool, it required some time to find the right tools to convert a face to a biometric hash and then compare these hashes accurately.
## Accomplishments that I'm proud of
I're proud of several accomplishments:
* **Facial Recognition:** Successfully implementing facial recognition technology, allowing users to connect based on photos.
* **Custom LLMs:** Building custom Language Models trained on sponsor documentation, which significantly aided the learning process for new technologies.
* **Real-World Application:** Developing a solution that addresses a common real-world problem - staying in touch with people.
## What I learned
Throughout this hackathon, I learned a great deal:
* **Technology Stacks:** I gained experience with a wide range of technologies, including computer vision, blockchain, and biometric encoding.
* **Solo Coding:** The experience of solo coding, while initially challenging, allowed for greater freedom and experimentation.
* **Documentation:** Building custom LLMs for various technologies, based on sponsor documentation, proved invaluable for rapid learning!
## What's next for FaceConnect
The future of FaceConnect looks promising:
* **Multiple Faces:** Supporting multiple people in a single photo to enhance the ability to reconnect with groups of friends or acquaintances.
* **Improved Transactions:** Expanding the transaction feature to enable users to pay or transfer funds to multiple people at once.
* **Additional Technologies:** Exploring and integrating new technologies to enhance the platform's capabilities and reach beyond Discord!
### Sponsor Information
ICP Challenge:
I leveraged ICP to build a decentralized landing page and implement user authentication so spammers and bots are blocked from accessing our bot.
Built custom LLM trained on ICP documentation to assist me in learning about ICP and building on ICP for the first time!
I really disliked deploying on Netlify and now that I’ve learned to deploy on ICP, I can’t wait to use it for all my web deployments from now on!
Canister ID: be2us-64aaa-aaaaa-qaabq-cai
Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/ICP.md>
Best Use of Hedera:
With FaceConnect, you are able to see your Hedera account info using your face, no need to memorize your public key or search your phone for it anymore!
Allow people to send transactions to people based on face! (Wasn’t able to get it working but I have all the prerequisites to make it work in the future - sender Hedera address, recipient Hedera address).
In the future, to pay someone or a vendor in Hedera, you can just scan their face to get their wallet address instead of preparing QR codes or copy and pasting!
I also built a custom LLM trained on Hedera documentation to assist me in learning about Hedera and building on Hedera as a beginner!
Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/hedera.md>
Best Use of Flow
With FaceConnect, to pay someone or a vendor in Flow, you can just scan their face to get their wallet address instead of preparing QR codes or copy and pasting!
I also built a custom LLM trained on Flow documentation to assist me in learning about Flow and building on Flow as a beginner!
Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/flow.md>
Georgian AI Challenge Prize
I was inspired by the data sources listed in the document by scraping LinkedIn profile pictures and their faces for obtaining a dataset to test and verify my face recognition model!
I also built a custom LLM trained on Georgian documentation to learn more about the firm!
Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/GeorgianAI.md>
Best .Tech Domain Name:
FaceCon.tech
Best AI Hack:
Use of AI include:
1. Used Computer Vision with OpenCV and the Dlib C++ Library to implement AI-based facial biometric encoding and recognition.
2. Leveraged ChromaDB and Llama Index to create vector embeddings of sponsor documentation
3. Utilized Langchain to implement document retrieval from VectorDBs
4. Used OpenAI to process user queries for everything Hack the Valley related!
By leveraging AI, FaceConnect has not only addressed a common real-world problem but has also pushed the boundaries of what's possible in terms of human-computer interaction. Its sophisticated AI algorithms and models enable users to connect based on visuals alone, transcending language and other barriers. This innovative use of AI in fostering human connections sets FaceConnect apart as an exceptional candidate for the "Best AI Hack" award.
Best Diversity Hack:
Our project aligns with the Diversity theme by promoting inclusivity and connection across various barriers, including language and disabilities. By enabling people to connect using facial recognition and images, our solution transcends language barriers and empowers individuals who may face challenges related to memory loss, speech, or hearing impairments. It ensures that everyone, regardless of their linguistic or physical abilities, can stay connected and engage with others, contributing to a more diverse and inclusive community where everyone's unique attributes are celebrated and connections are fostered.
Imagine trying to get someone’s contact in Germany, or Thailand, or Ethiopia? Now you can just take a picture!
Best Financial Hack:
FaceConnect is the ideal candidate for "Best Financial Hack" because it revolutionizes the way financial transactions can be conducted in a social context. By seamlessly integrating facial recognition technology with financial transactions, FaceConnect enables users to send and receive payments simply by recognizing the faces of their friends.
This innovation simplifies financial interactions, making it more convenient and secure for users to settle debts, split bills, or pay for services. With the potential to streamline financial processes, FaceConnect offers a fresh perspective on how we handle money within our social circles. This unique approach not only enhances the user experience but also has the potential to disrupt traditional financial systems, making it a standout candidate for the "Best Financial Hack" category. | ## Inspiration
We are tired of being forgotten and not recognized by others for our accomplishments. We built a software and platform that helps others get to know each other better and in a faster way, using technology to bring the world together.
## What it does
Face Konnex identifies people and helps the user identify people, who they are, what they do, and how they can help others.
## How we built it
We built it using Android studio, Java, OpenCV and Android Things
## Challenges we ran into
Programming Android things for the first time. WiFi not working properly, storing the updated location. Display was slow. Java compiler problems.
## Accomplishments that we're proud of.
Facial Recognition Software Successfully working on all Devices, 1. Android Things, 2. Android phones.
Prototype for Konnex Glass Holo Phone.
Working together as a team.
## What we learned
Android Things, IOT,
Advanced our android programming skills
Working better as a team
## What's next for Konnex IOT
Improving Facial Recognition software, identify and connecting users on konnex
Inputting software into Konnex Holo Phone | ## Inspiration
With the excitement of blockchain and the ever growing concerns regarding privacy, we wanted to disrupt one of the largest technology standards yet: Email. Email accounts are mostly centralized and contain highly valuable data, making one small breach, or corrupt act can serious jeopardize millions of people. The solution, lies with the blockchain. Providing encryption and anonymity, with no chance of anyone but you reading your email.
Our technology is named after Soteria, the goddess of safety and salvation, deliverance, and preservation from harm, which we believe perfectly represents our goals and aspirations with this project.
## What it does
First off, is the blockchain and message protocol. Similar to PGP protocol it offers \_ security \_, and \_ anonymity \_, while also **ensuring that messages can never be lost**. On top of that, we built a messenger application loaded with security features, such as our facial recognition access option. The only way to communicate with others is by sharing your 'address' with each other through a convenient QRCode system. This prevents anyone from obtaining a way to contact you without your **full discretion**, goodbye spam/scam email.
## How we built it
First, we built the block chain with a simple Python Flask API interface. The overall protocol is simple and can be built upon by many applications. Next, and all the remained was making an application to take advantage of the block chain. To do so, we built a React-Native mobile messenger app, with quick testing though Expo. The app features key and address generation, which then can be shared through QR codes so we implemented a scan and be scanned flow for engaging in communications, a fully consensual agreement, so that not anyone can message anyone. We then added an extra layer of security by harnessing Microsoft Azures Face API cognitive services with facial recognition. So every time the user opens the app they must scan their face for access, ensuring only the owner can view his messages, if they so desire.
## Challenges we ran into
Our biggest challenge came from the encryption/decryption process that we had to integrate into our mobile application. Since our platform was react native, running testing instances through Expo, we ran into many specific libraries which were not yet supported by the combination of Expo and React. Learning about cryptography and standard practices also played a major role and challenge as total security is hard to find.
## Accomplishments that we're proud of
We are really proud of our blockchain for its simplicity, while taking on a huge challenge. We also really like all the features we managed to pack into our app. None of us had too much React experience but we think we managed to accomplish a lot given the time. We also all came out as good friends still, which is a big plus when we all really like to be right :)
## What we learned
Some of us learned our appreciation for React Native, while some learned the opposite. On top of that we learned so much about security, and cryptography, and furthered our beliefs in the power of decentralization.
## What's next for The Soteria Network
Once we have our main application built we plan to start working on the tokens and distribution. With a bit more work and adoption we will find ourselves in a very possible position to pursue an ICO. This would then enable us to further develop and enhance our protocol and messaging app. We see lots of potential in our creation and believe privacy and consensual communication is an essential factor in our ever increasingly social networking world. | winning |
# HeadlineHound
HeadlineHound is a Typescript-based project that uses natural language processing to summarize news articles for you. With HeadlineHound, you can quickly get the key points of an article without having to read the entire thing. It's perfect for anyone who wants to stay up-to-date with the news but doesn't have the time to read every article in full. Whether you're a busy professional, a student, or just someone who wants to be informed, HeadlineHound is a must-have tool in your arsenal.
## How it Works
HeadlineHound uses natural language processing (NLP) via a fine-tuned ChatGPT to analyze and summarize news articles. It extracts the most relevant sentences and phrases from the article to create a concise summary. The user simply inputs the URL of the article they want summarized, and HeadlineHound does the rest.
## How we built it
We first fine-tuned a ChatGPT model using a dataset containing news articles and their summaries. This involved a lot of trying different datasets, testing various different data cleaning techniques to make our data easier to interpret, and configuring OpenAI LLM in about every way possible :D. Then, after settling on a dataset and model, we fine-tuned the general model with our dataset. Finally, we built this model into our webapp so that we can utilize it to summarize any news article that we pass in. We first take in the news article URL, pass it into an external web scraping API to extract all the article content, and finally feed that into our LLM to summarize the content into a few sentences.
## Challenges we ran into
Our biggest challenge with this project was trying to determine which dataset to use and how much data to train our model on. We ran into a lot of memory issues when trying to train it on very large datasets and this resulted in us having to use less data than we wanted to train it, resulting in summaries that could definitely be improved. Another big challenge that we ran into was determining the best OpenAI model to use for our purposes, and the best method of fine-tuning to apply.
## Accomplishments that we're proud of
We are very proud of the fact that we were able to so quickly learn how to utilize the OpenAI APIs to apply and fine-tune their generalized models to meet our needs. We quickly read through the documentation, played around with the software, and were able to apply it in a way that benefits people. Furthermore, we also developed an entire frontend application to be able to interact with this LLM in an easy way. Finally, we learned how to work together as a team and divide up the work based on our strengths to maximize our efficiency and utilize our time in the best way possible.
## What we learned
We learned a lot about the power of NLP. Natural language processing (NLP) is a fascinating and powerful field, and HeadlineHound is a great example of how we can use LLMs can be used to solve real-world problems. By leveraging these generalized AI models and then fine-tuning them for our purposes, we were able to create a tool that can quickly and accurately summarize news articles. Additionally, we learned that in order to create a useful tool, it's important to understand the needs of the user. With HeadlineHound, we recognized that people are increasingly time-poor and want to be able to stay informed about the news without having to spend their precious time reading articles. By creating a tool that meets this need, we were able to create something that people saw value in.
## What's next for Headline Hound
Here are a few potential next steps for the HeadlineHound project:
Improve the summarization algorithm: While HeadlineHound's summarization algorithm is effective, there is always room for improvement. One potential area of focus could be to improve the algorithm's ability to identify the most important sentences in an article, or to better understand the context of the article in order to generate a more accurate summary.
Add support for more news sources: HeadlineHound currently supports summarizing articles from a wide range of news sources, but there are always more sources to add. Adding support for more sources would make HeadlineHound even more useful to a wider audience.
Add more features: While the current version of HeadlineHound is simple and effective, there are always more features that could be added. For example, adding the ability to search for articles by keyword could make the tool even more useful to users. | ## Inspiration
Why is science inaccessibility such a problem? Every year, there are over two million scientific research papers published globally. This represents a staggering amount of knowledge, innovation, and progress. Yet, around 91.5% of research articles are never accessed by the wider public. Even among those who have access, the dense and technical language used in academic papers often serves as a barrier, deterring readers from engaging with groundbreaking discoveries.
We recognized the urgent need to bridge the gap between academia and the general public. Our mission is to make scientific knowledge accessible to everyone, regardless of their background or expertise. Open insightful.ly: You are presented with 3 top headlines, which are summarized with cohere as “news headlines” for each research article. You also see accompanying photos that are GPT produced.
## What it does
Our goal is that by summarizing long research articles that is difficult to read into headlines that attract people (who can then go on to read the full article or a summarized version), people will be more encouraged to find out more about the scientific world around them. It is also a great way for talented researchers who're writing these articles to gain publicity/recognition for their research. To make the website attractive, we'll be generating AI generated images based on each article using OpenAI's DALL-E 2 API. The site is built using Python, HTML, CSS using Pycharm.
## How we built it
1. Content Aggregation: We start by aggregating peer reviewed research papers from google scholar.
2. Cohere Summarizer API: To summarize both the essence of these papers and to generate a news headline, we used the Cohere Summarizer API.
3. User-Friendly Interface: Building using Python, we designed a scroller web app, inspired by Twitter and insta and the reliability of respected news outlets like The New York Times. Users can explore topics on 'you, 'hot,' and 'explore' pages, just like they would on their favorite news website while being captivated with the social media type attention grabbing scroll app.
4. AI-Generated Visuals: To enhance the user experience, we integrated OpenAI's DALL-E 2 API, which generates images based on each research article. These visuals help users quickly grasp the essence of the content.
5. User Engagement: We introduced a liking system, allowing users to endorse articles they find valuable. Top-liked papers are suggested more frequently, promoting quality content.
## Challenges we ran into
During the development of Insightful.ly, we faced several challenges. These included data aggregation complexities, integrating APIs effectively, and ensuring a seamless user experience. We also encountered issues related to data preprocessing and visualization. Overcoming these hurdles required creative problem-solving and technical agility.
## Accomplishments that we're proud of
We take immense pride in several accomplishments. First, we successfully created a platform that makes reading long research articles engaging and accessible. Second, our liking system encourages user engagement and recognition for researchers. Third, we integrated advanced AI technologies like Cohere Summarizer and OpenAI's DALL-E 2, enhancing our platform's capabilities. Lastly, building a user-friendly web app inspired by social media platforms and respected news outlets has been a significant achievement
## What we learned
Our journey with Insightful.ly has been a profound learning experience. We gained expertise in data aggregation, API integration, web development, user engagement strategies, and problem-solving. We also honed our collaboration skills and became proficient in version control. Most importantly, we deepened our understanding of the impact of our mission to make science and knowledge accessible.
## What's next for Insightful.ly
In the next phase of Insightful.ly, our primary focus is on enriching the user experience and expanding our content. We're committed to making knowledge accessible to an even wider audience by incorporating research articles from diverse domains, enhancing our user interface, and optimizing our platform for mobile accessibility. Furthermore, we aim to harness the latest AI advancements to improve summarization accuracy and generate captivating visuals, ensuring that the content we deliver remains at the forefront of innovation. Beyond technical improvements, we're building a vibrant community of knowledge seekers, facilitating user engagement through features like discussion forums and expert Q&A sessions. Our journey is marked by collaboration, partnerships, and measuring our impact on knowledge accessibility. This is just the beginning of our mission to make knowledge more accessible and insightful for all. | ## What it does
MusiCrowd is an interactive democratic music streaming service that allows individuals to vote on what songs they want to play next (i.e. if three people added three different songs to the queue the song at the top of the queue will be the song with the most upvotes). This system was built with the intentions of allowing entertainment venues (pubs, restaurants, socials, etc.) to be inclusive allowing everyone to interact with the entertainment portion of the venue.
The system has administrators of rooms and users in the rooms. These administrators host a room where users can join from a code to start a queue. The administrator is able to play, pause, skip, and delete and songs they wish. Users are able to choose a song to add to the queue and upvote, downvote, or have no vote on a song in queue.
## How we built it
Our team used Node.js with express to write a server, REST API, and attach to a Mongo database. The MusiCrowd application first authorizes with the Spotify API, then queries music and controls playback through the Spotify Web SDK. The backend of the app was used primarily to the serve the site and hold an internal song queue, which is exposed to the front-end through various endpoints.
The front end of the app was written in Javascript with React.js. The web app has two main modes, user and admin. As an admin, you can create a ‘room’, administrate the song queue, and control song playback. As a user, you can join a ‘room’, add song suggestions to the queue, and upvote / downvote others suggestions. Multiple rooms can be active simultaneously, and each room continuously polls its respective queue, rendering a sorted list of the queued songs, sorted from most to least popular. When a song ends, the internal queue pops the next song off the queue (the song with the most votes), and sends a request to Spotify to play the song. A QR code reader was added to allow for easy access to active rooms. Users can point their phone camera at the code to link directly to the room.
## Challenges we ran into
* Deploying the server and front-end application, and getting both sides to communicate properly.
* React state mechanisms, particularly managing all possible voting states from multiple users simultaneously.
* React search boxes.
* Familiarizing ourselves with the Spotify API.
* Allowing anyone to query Spotify search results and add song suggestions / vote without authenticating through the site.
## Accomplishments that we're proud of
Our team is extremely proud of the MusiCrowd final product. We were able to build everything we originally planned and more. The following include accomplishments we are most proud of:
* An internal queue and voting system
* Frontloading the development & working hard throughout the hackathon > 24 hours of coding
* A live deployed application accessible by anyone
* Learning Node.js
## What we learned
Garrett learned javascript :) We learned all about React, Node.js, the Spotify API, web app deployment, managing a data queue and voting system, web app authentication, and so so much more.
## What's next for Musicrowd
* Authenticate and secure routes
* Add IP/device tracking to disable multiple votes for browser refresh
* Drop songs that are less than a certain threshold of votes or votes that are active
* Allow tv mode to have current song information and display upcoming queue with current number of votes | losing |
## Submission Links:
YouTube:
* <https://www.youtube.com/watch?v=9eHJ7draeAY&feature=youtu.be>
GitHub:
* Front-End: <https://github.com/manfredxu99/nwhacks-fe>
* Back-End: <https://github.com/manfredxu99/nwhacks-be>
## Inspiration
Imagine living in a world where you can feel safe when going to a specific restaurant or going to a public space. Covid map helps you see which areas and restaurants have the most people who have been in close contact with covid patients. Other apps only tell you after someone has been confirmed suspected with COVID. However, with covid map you can tell in advance whether or not you should go to a specific area.
## What it does
In COVIDMAP you can look up the location you are thinking of visiting and get an up to date report on how many confirmed cases have visited the location in the past 3 days. With the colour codes indicating the severity of the covid cases in the area, COVID map is an easy and intuitive way to find out whether or not a grocery store or public area is safe to visit.
## How I built it
We started by building the framework. I built it using React Native as front-end, ExpressJS backend server and Google Cloud SQL server.
## Challenges I ran into
Maintaining proper communication between front-end and back-end, and writing stored procedures in database for advanced database SQL queries
## Accomplishments that I'm proud of
We are honoured having the opportunity to contribute well to the one of the main health and safety concerns, by creating an app that provide our fellow citizens to reduce the worries and concerns towards being exposed to COVID patients.
Moreover, in technical aspects, we have successfully maintained the front-end to back-end communication, as our app successfully fetches and stores data properly in this short time span of 24 hours.
## What I learned
We have learnt that creating a complete app within 24-hours is fairly challenging. As we needed to distribute time well to brainstorm great app ideas, design and implement UI, manage data, etc. This hackathon also further enhanced my ability to practice teamwork.
## What's next for COVIDMAP
We hope to implement this app locally in Vancouver to test out the usability of this project. Eventually we wish to help hotspot cities reduce their cases. | ## Inspiration
Ricky (Li Qi), one of our team members, has a girlfriend who lives with her grandma. Because it was hard to keep track of when both of them had done activities, they had to put extra care when deciding to meet, because that would be a high threat to his girlfriend's grandma. He proposed to our team to create this web-app so that both families could have a higher ease of tracking whatever activity they were doing so that the couple could meet safety.
## What it does
Our web-app gathers people by putting them in a pod, which would have a maximum size of 10 people. It would then analyze each person's activities and return a risk meter for each person. This meter would then be used to see if that pod could meet on that day safely, according to each person's exposure to Covid.
## How we built it
Our front-end and all of its animations are in vanilla React.js, with the exception of CircularProgression meters from the material-ui library. The front-end uses Redux for state management. The account logging mechanism uses a Google or Facebook passport. The back-end is done in express.js with a PostgreSQL database. Our web-app will be hosted on Heroku.
## Challenges we ran into
We are still amateur developers, and we had some difficulty loading data with axios before the react component was loaded. Our solution: using react-queries and useState to know when the component had to load or not.
## Accomplishments that we're proud of
We are absolutely proud of our UI and it's graphic design. Our front-end developer spent a lot of time smoothing transitions and change of pages, etc.
We are also proud of the idea of our project, as it could actually help a lot of individuals gradually reconnect, while doing so safely and following Covid-19 guidelines.
## What we learned
We learned during this hackathon that workflow management is actually important. While we all did our part and worked very hard, our efforts were not always coordinated, and we ended having a lot of bugs in the communication between out front-end and our back-end.
## What's next for Peapod
We are planning to add new feature to Peapod in the future. We plan to add a calendar of future events and what will be showing this person's risk level on different days in the future. We plan to also bring this feature to pods so that they can plan meeting in the future more effectively. Since we want Peapod to be accessible, we are also planning on porting this app to native devices as well as create a PWA version! | ## Inspiration
We built an AI-powered physical trainer/therapist that provides real-time feedback and companionship as you exercise.
With the rise of digitization, people are spending more time indoors, leading to increasing trends of obesity and inactivity. We wanted to make it easier for people to get into health and fitness, ultimately improving lives by combating these the downsides of these trends. Our team built an AI-powered personal trainer that provides real-time feedback on exercise form using computer vision to analyze body movements. By leveraging state-of-the-art technologies, we aim to bring accessible and personalized fitness coaching to those who might feel isolated or have busy schedules, encouraging a more active lifestyle where it can otherwise be intimidating.
## What it does
Our AI personal trainer is a web application compatible with laptops equipped with webcams, designed to lower the barriers to fitness. When a user performs an exercise, the AI analyzes their movements in real-time using a pre-trained deep learning model. It provides immediate feedback in both textual and visual formats, correcting form and offering tips for improvement. The system tracks progress over time, offering personalized workout recommendations and gradually increasing difficulty based on performance. With voice guidance included, users receive tailored fitness coaching from anywhere, empowering them to stay consistent in their journey and helping to combat inactivity and lower the barriers of entry to the great world of fitness.
## How we built it
To create a solution that makes fitness more approachable, we focused on three main components:
Computer Vision Model: We utilized MediaPipe and its Pose Landmarks to detect and analyze users' body movements during exercises. MediaPipe's lightweight framework allowed us to efficiently assess posture and angles in real-time, which is crucial for providing immediate form correction and ensuring effective workouts.
Audio Interface: We initially planned to integrate OpenAI’s real-time API for seamless text-to-speech and speech-to-text capabilities, enhancing user interaction. However, due to time constraints with the newly released documentation, we implemented a hybrid solution using the Vosk API for speech recognition. While this approach introduced slightly higher latency, it enabled us to provide real-time auditory feedback, making the experience more engaging and accessible.
User Interface: The front end was built using React with JavaScript for a responsive and intuitive design. The backend, developed in Flask with Python, manages communication between the AI model, audio interface, and user data. This setup allows the machine learning models to run efficiently, providing smooth real-time feedback without the need for powerful hardware.
On the user interface side, the front end was built using React with JavaScript for a responsive and intuitive design. The backend, developed in Flask with Python, handles the communication between the AI model, audio interface, and the user's data.
## Challenges we ran into
One of the major challenges was integrating the real-time audio interface. We initially planned to use OpenAI’s real-time API, but due to the recent release of the documentation, we didn’t have enough time to fully implement it. This led us to use the Vosk API in conjunction with our system, which introduced increased codebase complexity in handling real-time feedback.
## Accomplishments that we're proud of
We're proud to have developed a functional AI personal trainer that combines computer vision and audio feedback to lower the barriers to fitness. Despite technical hurdles, we created a platform that can help people improve their health by making professional fitness guidance more accessible. Our application runs smoothly on various devices, making it easier for people to incorporate exercise into their daily lives and address the challenges of obesity and inactivity.
## What we learned
Through this project, we learned that sometimes you need to take a "back door" approach when the original plan doesn’t go as expected. Our experience with OpenAI’s real-time API taught us that even with exciting new technologies, there can be limitations or time constraints that require alternative solutions. In this case, we had to pivot to using the Vosk API alongside our real-time system, which, while not ideal, allowed us to continue forward. This experience reinforced the importance of flexibility and problem-solving when working on complex, innovative projects.
## What's next for AI Personal Trainer
Looking ahead, we plan to push the limits of the OpenAI real-time API to enhance performance and reduce latency, further improving the user experience. We aim to expand our exercise library and refine our feedback mechanisms to cater to users of all fitness levels. Developing a mobile app is also on our roadmap, increasing accessibility and convenience. Ultimately, we hope to collaborate with fitness professionals to validate and enhance our AI personal trainer, making it a reliable tool that encourages more people to lead healthier, active lives. | losing |
REFERENCES/DATABASE USED:
1. Differentiation Between Viral vs. Bacterial Infections:
For machine-learning (SVM) to differentiate between viral and bacterial infections, we used MicroArray dataset GLP96 from:
Ramilo O, Allman W, Chung W, Mejias A et al. Gene expression patterns in blood leukocytes discriminate patients with
acute infections. Blood 2007 Mar 1;109(5):2066-77. PMID: 17105821
We used GEO2R to compare the bacterial infected datasets and viral infected datasets to get the top 100 differentially expressed genes. We then used the expression of these top differentially expressed genes for SVM.
We used sklearn to train our SVM. We do a logarithmic grid search from 10^-3 to 10^3 for tuning parameters C and gamma. We train using accuracy as our metric and by performing 5 fold cross validation in order to choose the best parameters.
Our final parameters were C=1 and gamma=0.001. Our five fold cross validation had an accuracy of 90.8% and our test set had a 100% accuracy.
For gene set enrichment analysis, we used GSEA to run enrichment tests against C7: immunological signatures (189 gene sets)
of GSEA's MSigDB database to compare patient sample against healthy samples.
1. Mapping bacterial genome to find antibiotic-resistance conferring genes
For our analysis, we used the Antibiotic Resistance Genes Database from:
Liu B, Pop M. ARDB-Antibiotic Resistance Genes Database. Nucleic Acids Res. 2009 Jan;37(Database issue):D443-7
1. For the web front end, we used a python django server. Our pages were built using the bootstrap styling framework, and analytics were provided using the highcharts.js plotting library. | ## Inspiration
I was interested in exploring the health datasets given by John Snow Labs in order to give users the ability to explore meaningful datasets. The datasets selected were Vaccination Data Immunization Kindergarten Students 2011 to 2014, Mammography Data from Breast Cancer Surveillance Consortium, 2014 State Occupational Employment and Wage Estimate dataset from the Bureau of Labor Statistics, and Mental Health Data from the CDC and Behavioral Risk Factor Surveillance System.
Vaccinations are crucial to ending health diseases as well as deter mortality and morbidity rates and has the potential to save future generations from serious disease. By visualization the dataset, users are able to better understand the current state of vaccinations and help to create policies to improve struggling states. Mammography is equally important in preventing health risks. Mental health is an important fact in determining the well-being of a state. Similarly, the visualization allows users to better understand correlations between preventative steps and cancerous outcomes.
## What it does
The data visualization allows users to observe possible impacts of preventative steps on breast cancer formation and the current state of immunizations for kindergarten students and mental health in the US. Using this data, we can analyze specific state and national trends and look at interesting relationships they may have on one another.
## How I built it
The web application's backend used node and express. The data visualizations and data processing used d3. Specific d3 packages allowed to map and spatial visualizations using network/node analysis. D3 allowed for interactivity between the user and visualization, which allows for more sophisticated exploration of the datasets.
## Challenges I ran into
Searching through the John Snow Labs datasets required a lot of time. Further processing and finding the best way to visualize the data took much of my time as some datasets included over 40,000 entries! Working d3 also took awhile to understand.
## Accomplishments that I'm proud of
In the end, I created a working prototype that visualizes significant data that may help a user understand a complex dataset. I learned a lot more about d3 and building effective data visualizations in a very constrained amount of time.
## What I learned
I learned a lot more about d3 and building effective data visualizations in a very constrained amount of time.
## What's next for Finder
I hope to add more interaction for users, such as allowing them to upload their own dataset to explore their data. | ## Inspiration
We are a team of goofy engineers and we love making people laugh. As Western students (and a stray Waterloo engineer), we believe it's important to have a good time. We wanted to make this game to give people a reason to make funny faces more often.
## What it does
We use OpenCV to analyze webcam input and initiate signals using winks and blinks. These signals control a game that we coded using PyGame.
See it in action here: <https://youtu.be/3ye2gEP1TIc>
## How to get set up
##### Prerequisites
* Python 2.7
* A webcam
* OpenCV
1. [Clone this repository on Github](https://github.com/sarwhelan/hack-the-6ix)
2. Open command line
3. Navigate to working directory
4. Run `python maybe-a-game.py`
## How to play
**SHOW ME WHAT YOU GOT**
You are playing as Mr. Poopybutthole who is trying to tame some wild GMO pineapples. Dodge the island fruit and get the heck out of there!
##### Controls
* Wink left to move left
* Wink right to move right
* Blink to jump
**It's time to get SssSSsssSSSssshwinky!!!**
## How we built it
Used haar cascades to detect faces and eyes. When users' eyes disappear, we can detect a wink or blink and use this to control Mr. Poopybutthole movements.
## Challenges we ran into
* This was the first game any of us have ever built, and it was our first time using Pygame! Inveitably, we ran into some pretty hilarious mistakes which you can see in the gallery.
* Merging the different pieces of code was by-far the biggest challenge. Perhaps merging shorter segments more frequently could have alleviated this.
## Accomplishments that we're proud of
* We had a "pineapple breakthrough" where we realized how much more fun we could make our game by including this fun fruit.
## What we learned
* It takes a lot of thought, time and patience to make a game look half decent. We have a lot more respect for game developers now.
## What's next for ShwinkySwhink
We want to get better at recognizing movements. It would be cool to expand our game to be a stand-up dance game! We are also looking forward to making more hacky hackeronis to hack some smiles in the future. | partial |
## Inspiration
## What it does
Buddy is your personal assistant that monitors your device's screen-on time so you can better understand your phone usage statistics. These newfound insights will help you to optimize your time and understand your usage patterns. Set a maximum screen-on time goal for yourself and your personal assistant, Buddy, will become happier the less you use your device and become more unhappy as you get close to or exceed the limit you set for yourself.
## How we built it
We built an android app using the NativeScript framework while coding in JavaScript. Our graphics were created through the use of Photoshop.
## Challenges we ran into
As it was our team's first time creating an android app, we went through much trial and error. Learning Google Firebase and the NativeScript framework was a welcomed challenge. We also faced some technical limitations two of our 3 computers were unable to emulate the app. Thus our ability to do testing was limited and as a result slowed our overall progress.
## Accomplishments that we're proud of
## What we learned
We were strangers when we met and had different backgrounds of experience. The three of us were separately experienced in either front-end, back-end, and UI/UX which made for a very interesting team dynamic. Handling better CSS, using better classes, and utilizing frameworks such as NativeScript were all things we learned from each other (and the internet).
## What's next for Buddy
Buddy will be venturing on to increase the depth of our phone usage analysis by not only including screen-on time but also usage by app. We also highly value user experience so we are looking into creating more customizable options such as different species of pets, colours of fur, and much more. An IOS app is also being considered as the next step for our product. | 
## Inspiration
Two weeks ago you attended an event and have met some wonderful people to help get through the event, each one of you exchanged contact information and hope to keep in touch with each other. Neither one of you contacted each other and eventually lost contact with each other. A potentially valuable friendship is lost due to neither party taking the initiative to talk to each other before. This is where *UpLync* comes to the rescue, a mobile app that is able to ease the connectivity with lost contacts.
## What it does?
It helps connect to people you have not been in touch for a while, the mobile application also reminds you have not been contacting a certain individual in some time. In addition, it has a word prediction function that allows users to send a simple greeting message using the gestures of a finger.
## Building process
We used mainly react-native to build the app, we use this javascript framework because it has cross platform functionality. Facebook has made a detailed documented tutorial at [link](https://facebook.github.io/react-native) and also [link](http://nativebase.io/) for easier cross-platform coding, we started with
* Designing a user interface that can be easily coded for both iOS and Android
* Functionality of the Lazy Typer
* Touch up with color scheme
* Coming up with a name for the application
* Designing a Logo
## Challenges we ran into
* non of the team members know each other before the event
* Coding in a new environment
* Was to come up with a simple UI that is easy on the eyes
* Keeping people connected through a mobile app
* Reduce the time taken to craft a message and send
## Accomplishments that we're proud of
* Manage to create a product with React-Native for the first time
* We are able to pick out a smooth font and colour scheme to polish up the UI
* Enabling push notifications to remind the user to reply
* The time taken to craft a message was reduced by 35% with the help of our lazy typing function
## What we learned.
We are able to learn the ins-and-outs of react-native framework, it saves us work to use android studio to create the application.
## What's next for UpLync
The next step for UpLync is to create an AI that learns the way how the user communicates with their peers and provide a suitable sentence structure. This application offers room to provide support for other languages and hopefully move into wearable technology. | ## Inspiration
We realized how visually-impaired people find it difficult to perceive the objects coming near to them, or when they are either out on road, or when they are inside a building. They encounter potholes and stairs and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make work and public places completely accessible!
## What it does
This is an IoT device which is designed to be something wearable or that can be attached to any visual aid being used. What it does is that it uses depth perception to perform obstacle detection, as well as integrates Google Assistant for outdoor navigation and all the other "smart activities" that the assistant can do. The assistant will provide voice directions (which can be geared towards using bluetooth devices easily) and the sensors will help in avoiding obstacles which helps in increasing self-awareness. Another beta-feature was to identify moving obstacles and play sounds so the person can recognize those moving objects (eg. barking sounds for a dog etc.)
## How we built it
Its a raspberry-pi based device and we integrated Google Cloud SDK to be able to use the vision API and the Assistant and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds as well as camera and microphone.
## Challenges we ran into
It was hard for us to set up raspberry-pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how micro controllers work, especially not being from the engineering background and 2 members being high school students. Also, multi-threading was a challenge for us in embedded architecture
## Accomplishments that we're proud of
After hours of grinding, we were able to get the raspberry-pi working, as well as implementing depth perception and location tracking using Google Assistant, as well as object recognition.
## What we learned
Working with hardware is tough, even though you could see what is happening, it was hard to interface software and hardware.
## What's next for i4Noi
We want to explore more ways where i4Noi can help make things more accessible for blind people. Since we already have Google Cloud integration, we could integrate our another feature where we play sounds of living obstacles so special care can be done, for example when a dog comes in front, we produce barking sounds to alert the person. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference to the lives of the people. | partial |
## Inspiration
At reFresh, we are a group of students looking to revolutionize the way we cook and use our ingredients so they don't go to waste. Today, America faces a problem of food waste. Waste of food contributes to the acceleration of global warming as more produce is needed to maintain the same levels of demand. In a startling report from the Atlantic, "the average value of discarded produce is nearly $1,600 annually" for an American family of four. In terms of Double-Doubles from In-n-Out, that goes to around 400 burgers. At reFresh, we believe that this level of waste is unacceptable in our modern society, imagine every family in America throwing away 400 perfectly fine burgers. Therefore we hope that our product can help reduce food waste and help the environment.
## What It Does
reFresh offers users the ability to input ingredients they have lying around and to find the corresponding recipes that use those ingredients making sure nothing goes to waste! Then, from the ingredients left over of a recipe that we suggested to you, more recipes utilizing those same ingredients are then suggested to you so you get the most usage possible. Users have the ability to build weekly meal plans from our recipes and we also offer a way to search for specific recipes. Finally, we provide an easy way to view how much of an ingredient you need and the cost of those ingredients.
## How We Built It
To make our idea come to life, we utilized the Flask framework to create our web application that allows users to use our application easily and smoothly. In addition, we utilized a Walmart Store API to retrieve various ingredient information such as prices, and a Spoonacular API to retrieve recipe information such as ingredients needed. All the data is then backed by SQLAlchemy to store ingredient, recipe, and meal data.
## Challenges We Ran Into
Throughout the process, we ran into various challenges that helped us grow as a team. In a broad sense, some of us struggled with learning a new framework in such a short period of time and using that framework to build something. We also had issues with communication and ensuring that the features we wanted implemented were made clear. There were times that we implemented things that could have been better done if we had better communication. In terms of technical challenges, it definitely proved to be a challenge to parse product information from Walmart, to use the SQLAlchemy database to store various product information, and to utilize Flask's framework to continuously update the database every time we added a new recipe.
However, these challenges definitely taught us a lot of things, ranging from a better understanding to programming languages, to learning how to work and communicate better in a team.
## Accomplishments That We're Proud Of
Together, we are definitely proud of what we have created. Highlights of this project include the implementation of a SQLAlchemy database, a pleasing and easy to look at splash page complete with an infographic, and being able to manipulate two different APIs to feed of off each other and provide users with a new experience.
## What We Learned
This was all of our first hackathon, and needless to say, we learned a lot. As we tested our physical and mental limits, we familiarized ourselves with web development, became more comfortable with stitching together multiple platforms to create a product, and gained a better understanding of what it means to collaborate and communicate effectively in a team. Members of our team gained more knowledge in databases, UI/UX work, and popular frameworks like Boostrap and Flask. We also definitely learned the value of concise communication.
## What's Next for reFresh
There are a number of features that we would like to implement going forward. Possible avenues of improvement would include:
* User accounts to allow ingredients and plans to be saved and shared
* Improvement in our search to fetch more mainstream and relevant recipes
* Simplification of ingredient selection page by combining ingredients and meals in one centralized page | ## Inspiration
Almost 2.2 million tonnes of edible food is discarded each year in Canada alone, resulting in over 17 billion dollars in waste. A significant portion of this is due to the simple fact that keeping track of expiry dates for the wide range of groceries we buy is, put simply, a huge task. While brainstorming ideas to automate the management of these expiry dates, discussion came to the increasingly outdated usage of barcodes for Universal Product Codes (UPCs); when the largest QR codes can store [thousands of characters](https://stackoverflow.com/questions/12764334/qr-code-max-char-length), why use so much space for a 12 digit number?
By building upon existing standards and the everyday technology in our pockets, we're proud to present **poBop**: our answer to food waste in homes.
## What it does
Users are able to scan the barcodes on their products, and enter the expiration date written on the packaging. This information is securely stored under their account, which keeps track of what the user has in their pantry. When products have expired, the app triggers a notification. As a proof of concept, we have also made several QR codes which show how the same UPC codes can be encoded alongside expiration dates in a similar amount of space, simplifying this scanning process.
In addition to this expiration date tracking, the app is also able to recommend recipes based on what is currently in the user's pantry. In the event that no recipes are possible with the provided list, it will instead recommend recipes in which has the least number of missing ingredients.
## How we built it
The UI was made with native android, with the exception of the scanning view which made use of the [code scanner library](https://github.com/yuriy-budiyev/code-scanner).
Storage and hashing/authentication were taken care of by [MongoDB](https://www.mongodb.com/) and [Bcrypt](https://github.com/pyca/bcrypt/) respectively.
Finally in regards to food, we used [Buycott](https://www.buycott.com/)'s API for UPC lookup and [Spoonacular](https://spoonacular.com/) to look for valid recipes.
## Challenges we ran into
As there is no official API or publicly accessible database for UPCs, we had to test and compare multiple APIs before determining Buycott had the best support for Canadian markets. This caused some issues, as a lot of processing was needed to connect product information between the two food APIs.
Additionally, our decision to completely segregate the Flask server and apps occasionally resulted in delays when prioritizing which endpoints should be written first, how they should be structured etc.
## Accomplishments that we're proud of
We're very proud that we were able to use technology to do something about an issue that bothers not only everyone on our team on a frequent basis (something something cooking hard) but also something that has large scale impacts on the macro scale. Some of our members were also very new to working with REST APIs, but coded well nonetheless.
## What we learned
Flask is a very easy to use Python library for creating endpoints and working with REST APIs. We learned to work with endpoints and web requests using Java/Android. We also learned how to use local databases on Android applications.
## What's next for poBop
We thought of a number of features that unfortunately didn't make the final cut. Things like tracking nutrition of the items you have stored and consumed. By tracking nutrition information, it could also act as a nutrient planner for the day. The application may have also included a shopping list feature, where you can quickly add items you are missing from a recipe or use it to help track nutrition for your upcoming weeks.
We were also hoping to allow the user to add more details to the items they are storing, such as notes for what they plan on using it for or the quantity of items.
Some smaller features that didn't quite make it included getting a notification when the expiry date is getting close, and data sharing for people sharing a household. We were also thinking about creating a web application as well, so that it would be more widely available.
Finally, we strongly encourage you, if you are able and willing, to consider donating to food waste reduction initiatives and hunger elimination charities.
One of the many ways to get started can be found here:
<https://rescuefood.ca/>
<https://secondharvest.ca/>
<https://www.cityharvest.org/>
# Love,
# FSq x ANMOL | ## Inspiration
Each year, over approximately 1.3 billion tonnes of produced food is wasted ever year, a startling statistic that we found to be truly unacceptable, especially for the 21st century. The impacts of such waste are wide spread, ranging from the millions of starving individuals around the world that could in theory have been fed with this food to the progression of global warming caused by the greenhouse gases released as a result of emissions from decaying food waste. Ultimately, the problem at hand was one that we wanted to fix using an application, which led us precisely to the idea of Cibus, an application that helps the common householder manage the food in their fridge with ease and minimize waste throughout the year.
## What it does
Essentially, our app works in two ways. First, the app uses image processing to take pictures of receipts and extract the information from it that we then further process in order to identify the food purchased and the amount of time till that particular food item will expire. This information is collectively stored in a dictionary that is specific to each user on the app. The second thing our app does is sort through the list of food items that a user has in their home and prioritize the foods that are closest to expiry. With this prioritized list, the app then suggests recipes that maximize the use of food that is about to expire so that as little of it goes to waste as possible once the user makes the recipes using the ingredients that are about to expire in their home.
## How we built it
We essentially split the project into front end and back end work. On the front end, we used iOS development in order to create the design for the app and sent requests to the back end for information that would create the information that needed to be displayed on the app itself. Then, on the backend, we used flask as well as Cloud9 for a development environment in order to compose the code necessary to help the app run. We incorporated image processing APIs as well as a recipe API in order to help our app accomplish the goals we set out for it. Furthermore, we were able to code our app such that individual accounts can be created within it and most of the functionalities of it were implemented here. We used Google Cloud Vision for OCR and Microsoft Azure for cognitive processing in order to implement a spell check in our app.
## Challenges we ran into
A lot of the challenges initially derived from identifying the scope of the program and how far we wanted to take the app. Ultimately, we were able to decide on an end goal and we began programming. Along the way, many road blocks occurred including how to integrate the backend seamlessly into the front end and more importantly, how to integrate the image processing API into the app. Our first attempts at the image processing API did not end as well as the API only allowed for one website to be searched at a time for at a time, when more were required to find instances of all of the food items necessary to plug into the app. We then turned to Google Cloud Vision, which worked well with the app and allowed us to identify the writing on receipts.
## Accomplishments that we're proud of
We are proud to report that the app works and that a user can accurately upload information onto the app and generate recipes that correspond to the items that are about to expire the soonest. Ultimately, we worked together well throughout the weekend and are proud of the final product.
## What we learned
We learnt that integrating image processing can be harder than initially expected, but manageable. Additionally, we learned how to program an app from front to back in a manner that blends harmoniously such that the app itself is solid on the interface and in calling information.
## What's next for Cibus
There remain a lot of functionalities that can be further optimized within the app, like number of foods with corresponding expiry dates in the database. Furthermore, we would in the future like the user to be able to take a picture of a food item and have it automatically upload the information on it to the app. | partial |
## Inspiration
Relationships between mentees and mentors are very important for career success. People want to connect with others in a professional manner to give and receive career advice. While many professional mentoring relationships form naturally, it can be particularly difficult for people in minority groups, such as women and people of color, to find mentors who can relate to their personal challenges and offer genuine advice. This website can provide a platform for those people to find mentors that can help them in their professional career.
## What it does
This web application is a platform that connects mentors and mentees online.
## How we built it
Our team used a MongoDB Atlas database in the backend for users. In addition, the team used jQuery (JavaScript) and Flask (Python) to increase the functionality of the site.
## Challenges we ran into
There were many challenges that we ran into. Some of the biggest ones include authenticating the MongoDB server and connecting jQuery to Python.
## Accomplishments that I'm proud of
We are proud of our ability to create many different aspects of the project in parallel. In addition, we are proud of setting up a cloud database, organizing a multi-page frontend, designing a searching algorithm, and much of the stitching completed in Flask.
## What we learned
We learned a lot about Python, JavaScript, MongoDB, and GET/POST requests.
## What's next for Mentors In Tech
More mentors and advanced searching could further optimize our platform. | ## Inspiration 💡
The push behind EcoCart is the pressing call to weave sustainability into our everyday actions. I've envisioned a tool that makes it easy for people to opt for green choices when shopping.
## What it does 📑
EcoCart is your AI-guided Sustainable Shopping Assistant, designed to help shoppers minimize their carbon impact. It comes with a user-centric dashboard and a browser add-on for streamlined purchase monitoring.
By integrating EcoCart's browser add-on with favorite online shopping sites, users can easily oversee their carbon emissions. The AI functionality dives deep into the data, offering granular insights on the ecological implications of every transaction.
Our dashboard is crafted to help users see their sustainable journey and make educated choices. Engaging charts and a gamified approach nudge users towards greener options and aware buying behaviors.
EcoCart fosters an eco-friendly lifestyle, fusing AI, an accessible dashboard, and a purchase-monitoring add-on. Collectively, our choices can echo a positive note for the planet.
## How it's built 🏗️
EcoCart is carved out using avant-garde AI tools and a strong backend setup. While our AI digs into product specifics, the backend ensures smooth data workflow and user engagement. A pivotal feature is the inclusion of SGID to ward off bots and uphold genuine user interaction, delivering an uninterrupted user journey and trustworthy eco metrics.
## Challenges and hurdles along the way 🧱
* Regular hiccups with Chrome add-on's hot reloading during development
* Sparse online guides on meshing Supabase Google Auth with a Chrome add-on
* Encountered glitches when using Vite for bundling our Chrome extension
## Accomplishments that I'am proud of 🦚
* Striking user interface
* Working prototype
* Successful integration of Supabase in our Chrome add-on
* Advocacy for sustainability through #techforpublicgood
## What I've learned 🏫
* Integrating SGID into a NextJS CSR web platform
* Deploying Supabase in a Chrome add-on
* Crafting aesthetically appealing and practical charts via Chart.js
## What's next for EcoCart ⌛
* Expanding to more e-commerce giants like Carousell, Taobao, etc.
* Introducing a rewards mechanism linked with our gamified setup
* Launching a SaaS subscription model for our user base. | ## Inspiration
As a team of 4 female tech enthusiasts, we have witnessed firsthand the gender disparities in the tech industry, not just from our campus experiences, but also through events like hackathons, networking, and career fairs.
Women are under-represented in the tech industry: 1) Only 20% of tech roles are held by women; 2) The attrition/retention rate of women is 41% as compared to 17% for men; 3) Only 12% of tech-related patents are held by women. These are happening as a result of systemic barriers to entry to women in this industry, including a lack of robust support system and mentorship, unequal growth opportunities, and workplace culture.
Therefore, we share this vision of creating a platform where **women and non-binary** individuals are not just participating but are also being heard and thriving. We want to build a **more equitable and inclusive tech culture**.
## What it does
1. Niche Focus: Women and non-binary community so that they feel safe and comfortable sharing and participating.
2. Personal Touch & Engagement: More than just ‘coffee chats’ and ‘networking’, we provide the platform for people to build common experiences, share their personal stories and connect with individuals on a more personal level.
3. Personalized Support: We care about individual interests and career goals instead of mass feeding the content. Events and mentorship programs that suit one’s interests, as well as experience levels will be prioritized by our recommendation system.
4. Global Access: We aim to reach a wide demographics and provide a greater network for people to connect to
## How we built it
* UI/UX Design & Prototyping: Figma
* Front-end: Typescript, React, Tailwind CSS Next.JS, Next.JS, Nested Layout, Zustand for state management, context provider
* Back-end: Typescript, Next.js, Next.js Server Actions, Mongodb, socket.io server for real time chatting, Zod for validation
* API: authentication integration- Clerk and webhook; TinyMCE Rich Text editor react integration
## Challenges we ran into
User privacy: How to verify a user’s identity? Should we limit our audience to women and non-binary only or should it be open to everyone? What if users prefer not to share that personal information? | partial |
## Inspiration ✨
Seeing friends' lives being ruined through **unhealthy** attachment to video games. Struggling with regulating your emotions properly is one of the **biggest** negative effects of video games.
## What it does 🍎
YourHP is a webapp/discord bot designed to improve the mental health of gamers. By using ML and AI, when specific emotion spikes are detected, voice recordings are queued *accordingly*. When the sensor detects anger, calming reassurance is played. When happy, encouragement is given, to keep it up, etc.
The discord bot is an additional fun feature that sends messages with the same intention to improve mental health. It sends advice, motivation, and gifs when commands are sent by users.
## How we built it 🔧
Our entire web app is made using Javascript, CSS, and HTML. For our facial emotion detection, we used a javascript library built using TensorFlow API called FaceApi.js. Emotions are detected by what patterns can be found on the face such as eyebrow direction, mouth shape, and head tilt. We used their probability value to determine the emotional level and played voice lines accordingly.
The timer is a simple build that alerts users when they should take breaks from gaming and sends sound clips when the timer is up. It uses Javascript, CSS, and HTML.
## Challenges we ran into 🚧
Capturing images in JavaScript, making the discord bot, and hosting on GitHub pages, were all challenges we faced. We were constantly thinking of more ideas as we built our original project which led us to face time limitations and was not able to produce some of the more unique features to our webapp. This project was also difficult as we were fairly new to a lot of the tools we used. Before this Hackathon, we didn't know much about tensorflow, domain names, and discord bots.
## Accomplishments that we're proud of 🏆
We're proud to have finished this product to the best of our abilities. We were able to make the most out of our circumstances and adapt to our skills when obstacles were faced. Despite being sleep deprived, we still managed to persevere and almost kept up with our planned schedule.
## What we learned 🧠
We learned many priceless lessons, about new technology, teamwork, and dedication. Most importantly, we learned about failure, and redirection. Throughout the last few days, we were humbled and pushed to our limits. Many of our ideas were ambitious and unsuccessful, allowing us to be redirected into new doors and opening our minds to other possibilities that worked better.
## Future ⏭️
YourHP will continue to develop on our search for a new way to combat mental health caused by video games. Technological improvements to our systems such as speech to text can also greatly raise the efficiency of our product and towards reaching our goals! | ## Overview
We created a Smart Glasses hack called SmartEQ! This unique hack leverages the power of machine learning and facial recognition to determine the emotion of the person you are talking to and conducts sentiment analysis on your conversation, all in real time! Since we couldn’t get a pair of Smart Glasses (*hint hint* MLH Hardware Lab?), we created our MVP using a microphone and webcam, acting just like the camera and mic would be on a set of Smart Glasses.
## Inspiration
Millions of children around the world have Autism Spectrum Disorder, and for these kids it can be difficult to understand the emotions people around them. This can make it very hard to make friends and get along with their peers. As a result, this negatively impacts their well-being and leads to issues including depression. We wanted to help these children understand others’ emotions and enable them to create long-lasting friendships. We learned about research studies that wanted to use technology to help kids on the autism spectrum understand the emotions of others and thought, hey, let’s see if we could use our weekend to build something that can help out!
## What it does
SmartEQ determines the mood of the person in frame, based off of their facial expressions and sentiment analysis on their speech. SmartEQ then determines the most probable emotion from the image analysis and sentiment analysis of the conversation, and provides a percentage of confidence in its answer. SmartEQ helps a child on the autism spectrum better understand the emotions of the person they are conversing with.
## How we built it
The lovely front end you are seeing on screen is build with React.js and the back end is a Python flask server. For the machine learning predictions we used a whole bunch of Microsoft Azure Cognitive Services API’s, including speech to text from the microphone, conducting sentiment analysis based off of this text, and the Face API to predict the emotion of the person in frame.
## Challenges we ran into
Newton and Max came to QHacks as a confident duo with the initial challenge of snagging some more teammates to hack with! For Lee and Zarif, this was their first hackathon and they both came solo. Together, we ended up forming a pretty awesome team :D. But that’s not to say everything went as perfectly as our new found friendships did. Newton and Lee built the front end while Max and Zarif built out the back end, and as you may have guessed, when we went to connect our code together, just about everything went wrong. We kept hitting the maximum Azure requests that our free accounts permitted, encountered very weird socket-io bugs that made our entire hack break and making sure Max didn’t drink less than 5 Red Bulls per hour.
## Accomplishments that we're proud of
We all worked with technologies that we were not familiar with, and so we were able to learn a lot while developing a functional prototype. We synthesised different forms of machine learning by integrating speech-to-text technology with sentiment analysis, which let us detect a person’s emotions just using their spoken words. We used both facial recognition and the aforementioned speech-to-sentiment-analysis to develop a holistic approach in interpreting a person’s emotions. We used Socket.IO to create real-time input and output data streams to maximize efficiency.
## What we learned
We learnt about web sockets, how to develop a web app using web sockets and how to debug web socket errors. We also learnt how to harness the power of Microsoft Azure's Machine Learning and Cognitive Science libraries. We learnt that "cat" has a positive sentiment analysis and "dog" has a neutral score, which makes no sense what so ever because dogs are definitely way cuter than cats. (Zarif strongly disagrees)
## What's next for SmartEQ
We would deploy our hack onto real Smart Glasses :D. This would allow us to deploy our tech in real life, first into small sample groups to figure out what works and what doesn't work, and after we smooth out the kinks, we could publish it as an added technology for Smart Glasses. This app would also be useful for people with social-emotional agnosia, a condition that may be caused by brain trauma that can cause them to be unable to process facial expressions.
In addition, this technology has many other cool applications! For example, we could make it into a widely used company app that company employees can integrate into their online meeting tools. This is especially valuable for HR managers, who can monitor their employees’ emotional well beings at work and be able to implement initiatives to help if their employees are not happy. | ## Inspiration
As software engineers, we constantly seek ways to optimize efficiency and productivity. While we thrive on tackling challenging problems, sometimes we need assistance or a nudge to remember that support is available. Our app assists engineers by monitoring their states and employs Machine Learning to predict their efficiency in resolving issues.
## What it does
Our app leverages LLMs to predict the complexity of GitHub issues based on their title, description, and the stress level of the assigned software engineer. To gauge the stress level, we utilize a machine learning model that examines the developer’s sleep patterns, sourced from TerraAPI. The app provides task completion time estimates and periodically checks in with the developer, suggesting when to seek help. All this is integrated into a visually appealing and responsive front-end that fits effortlessly into a developer's routine.
## How we built it
A range of technologies power our app. The front-end is crafted with Electron and ReactJS, offering compatibility across numerous operating systems. On the backend, we harness the potential of webhooks, Terra API, ChatGPT API, Scikit-learn, Flask, NodeJS, and ExpressJS. The core programming languages deployed include JavaScript, Python, HTML, and CSS.
## Challenges we ran into
Constructing the app was a blend of excitement and hurdles due to the multifaceted issues at hand. Setting up multiple webhooks was essential for real-time model updates, as they depend on current data such as fresh Github issues and health metrics from wearables. Additionally, we ventured into sourcing datasets and crafting machine learning models for predicting an engineer's stress levels and employed natural language processing for issue resolution time estimates.
## Accomplishments that we're proud of
In our journey, we scripted close to 15,000 lines of code and overcame numerous challenges. Our preliminary vision had the front end majorly scripted in JavaScript, HTML, and CSS — a considerable endeavor in contemporary development. The pinnacle of our pride is the realization of our app, all achieved within a 3-day hackathon.
## What we learned
Our team was unfamiliar to one another before the hackathon. Yet, our decision to trust each other paid off as everyone contributed valiantly. We honed our skills in task delegation among the four engineers and encountered and overcame issues previously uncharted for us, like running multiple webhooks and integrating a desktop application with an array of server-side technologies.
## What's next for TBox 16 Pro Max (titanium purple)
The future brims with potential for this project. Our aspirations include introducing real-time stress management using intricate time-series models. User customization options are also on the horizon to enrich our time predictions. And certainly, front-end personalizations, like dark mode and themes, are part of our roadmap. | winning |
# HearU

## Inspiration
Speech is one of the fundamental ways for us to communicate with each other, but many are left out of this channel of communication. Around half a billion people around the world suffer from hearing loss, this is something we wanted to address and help remedy.
Without the ability to hear, many are locked out of professional opportunities to advance their careers especially for jobs requiring leadership and teamwork. Even worse, many have trouble creating fulfilling relationships with others due to this communication hurdle.
Our aim is to make this an issue of the past and allow those living with hearing loss to be able to do everything they wish to do easily.
HearU is the first step in achieving this dream.

## What it does
HearU is an inexpensive and easy-to-use device designed to help those with hearing loss understand the world around them by giving them subtitles for the world.
The device attaches directly to the user’s arm and automatically transcribes any speech heard by it to the user onto the attached screen. The device then adds the transcription on a new line to make sure it is distinct for the user.
A vibration motor is also added to alert the user when a new message is added. The device also has a built-in hazard detection to determine if there are hazards in the area such as fire alarms. HearU also allows the user to store conversations that took place locally on the device for privacy.
Most Importantly, HearU allows the user to interact with ease with others around them.
## How we built it
HearU is based on a Raspberry Pi 4, and uses a display, microphone and a servo motor to operate and interface with the user. A Python script analyzes the data from the microphone, and updates the google cloud API, recording the real-time speech and then displaying the information on the attached screen. Additionally, it uses the pyaudio module to measure the decibels of the sounds surrounding the device, and it is graphically presented to the user. The frontend is then rendered with a python library called pyQT. The entire device is enclosed in a custom-built 3d printed housing unit that is attached to the user’s arm.
This project required a large number of different parts that were not standard as we were trying to minimize size as much as possible but ended up integrating beautifully together.

## Challenges we ran into
The biggest challenge that we ran into was hardware availability. I needed to scavenge for parts between multiple different sites, which made connecting “interesting” to say the least since we had to use 4 HDMI connectors to get the right angles we needed. Finding a screen that was affordable, that would arrive on time and the right size for the project was not easy.
Trying to fit all the components into something that would fit on an arm was extremely difficult as well and we had to compromise in certain areas such as having the power supply in the pocket for weight and size constraints. We started using a raspberry pi zero w at first but found it was too underpowered for what we wanted to do so we had to switch to the pi4. This meant we had to redesign the enclosure.
We were not able to get adequate time to build the extra features we wanted such as automatic translation since we were waiting on the delivery of the parts, as well as spending a lot of time troubleshooting the pi. We even had the power cut out for one of our developers who had all the frontend code locally saved!
In the end, we are happy with what we were able to create in the time frame given.
## Accomplishments that we're proud of
I’m proud of getting the whole hack to work at all! Like I mentioned above, the hardware was an issue to get integrated and we were under a major time crunch. Plus working with a team remotely for a hardware project was difficult and our goal was very ambitious.
We are very proud that we were able to achieve it!
## What we learned
We learned how to create dense electronic devices that manage space effectively and we now have a better understanding of the iterative engineering design process as we went through three iterations of the project. We also learned how to use google cloud as it was the first time any of us have used it. Furthermore, we learned how to create a good-looking UI using just python and pyqt. Lastly, we learned how to manipulate audio and various audio fingerprinting algorithms to match audio to sounds in the environment.
## What's next for HearU
There are many things that we can do to improve HearU.
We would like to unlock the potential for even more communication by adding automatic language translation to break down all borders.
We can also miniaturize the hardware immensely by using more custom-built electronics, to allow HearU to be as cumbersome and as easy to use as possible.
We would work on Integrating a camera to track users’ hands so that we can convert sign language into speech so that those with hearing impairments can easily communicate with others, even if they don’t know sign language.
 | ## Inspiration
About 0.2 - 2% of the population suffers from deaf-blindness and many of them do not have the necessary resources to afford accessible technology. This inspired us to build a low cost tactile, braille based system that can introduce accessibility into many new situations that was previously not possible.
## What it does
We use 6 servo motors controlled by Arduino that mimic braille style display by raising or lowering levers based upon what character to display. By doing this twice per second, even long sentences can be transmitted to the person. All the person needs to do is put their palm on the device. We believe this method is easier to learn and comprehend as well as way cheaper than refreshable braille displays which usually cost more than $5,000 on an average.
## How we built it
We use Arduino and to send commands, we use PySerial which is a Python Library. To simulate the reader, we have also build a smartbot with it that relays information to the device. For that we have used Google's Dialogflow.
We believe that the production cost of this MVP is less than $25 so this product is commercially viable too.
## Challenges we ran into
It was a huge challenge to get the ports working with Arduino. Even with the code right, pyserial was unable to send commands to Arduino. We later realized after long hours of struggle that the key to get it to work is to give some time to the port to open and initialize. So by adding a wait of two seconds and then sending the command, we finally got it to work.
## Accomplishments that we're proud of
This was our first hardware have to pulling something like that together a lot of fun!
## What we learned
There were a lot of things that were learnt including the Arduino port problem. We learnt a lot about hardware too and how serial ports function. We also learnt about pulses and how sending certain amount of pulses we are able to set the servo to a particular position.
## What's next for AddAbility
We plan to extend this to other businesses by promoting it. Many kiosks and ATMs can be integrated with this device at a very low cost and this would allow even more inclusion in the society. We also plan to reduce the prototype size by using smaller motors and using steppers to move the braille dots up and down. This is believed to further bring the cost down to around $15. | ## Inspiration
Our inspiration for Servitium comes from a deep commitment to enhancing the lives of senior citizens by providing a centralized hub for essential services tailored to their unique needs.
## What it does
Servitium serves as a comprehensive service hub specifically designed for senior citizens. It streamlines access to various services, making daily tasks and support more accessible and efficient.
Users are provided with a user-friendly interface for finding services that fit their needs. The application also supports use of natural language to find services (e.g. can enter "I would like to find an affordable plumber" in the home search bar).
Users can also leave reviews for service providers which makes it easier for other users to find the service providers that best suit their needs.
## How we built it
Our platform integrates user-friendly interfaces and secure backend systems to ensure a seamless experience for both users and service providers.
* Front-end: React
* Back-end: Node.js, MongoDB
* Authentication: Auth0
* APIs used: OpenAI, Google Maps, Google Places, Nodemailer, Twilio
## Challenges we ran into
* Integrating OpenAI's API to client-side of the application
* Integrating Nodemailer into the back-end of the application
## Accomplishments that we're proud of
* Integrating AI into our stack - users can use natural language to find services
* Connecting back-end and front-end of our application
* Making use of Auth0 to provide authentication
* Introducing real-time texting and emailing notification feature
## What we learned
* Using third-party APIs
* Full stack development (working on both the server and the client side)
* How to setup and leverage MongoDB
## What's next for Servitium
* Setting up the service provider side of the application since the application currently only supports the client side
* Retrieving more data on the service provider side (e.g. geolocation) | partial |
## Inspiration
Having grown up in developing countries, our team understands that there are many people who simply cannot afford to visit doctors frequently (distance, money, etc.), even when regular check-ups are required. This brings forth the problem - patients in developing countries often have the money to buy medicine but not enough money to visit the doctor every time. Not only does this disparity lead to lower mortality rates for citizens and children but makes it difficult to seek help when you truly need it.
Our team aims to bridge that gap and provide patients with the healthcare they deserve by implementing "Pillar" stations in settings of need.
## What it does
Patients visit the pillar stations for at least one of three purposes:
1. Update doctor with medical symptoms
2. Get updates from doctors regarding their past symptoms and progress
3. Get medicine prescribed by doctors
For the first purpose, patients activate the Pillar stations (Amazon Echo) and are called on a secure, private line to discuss symptoms and describe how they've been feeling. Pillar's algorithm processes that audio and summarizes it through machine learning APIs and sends it to the remote doctor in batches. Our reason for choosing phone calls is to increase privacy, accessibility and feasibility. The summarized information which includes sentiment analysis, key word detection and entity identification is stored in the doctor's dashboard and the doctor can update fields as required such as new notes, medicine to dispense, specific instructions etc. The purpose of this action is to inform the doctor of any updates so the doctor is briefed and well-prepared to speak to the patient next time they visit the village. There are also emergency update features that allow the doctor to still be connected with patients he sees less often.
For the second purpose, patients receive updates and diagnosis from the doctor regarding the symptoms they explained during their last Pillar visit. This diagnosis is not based purely on a patient's described symptoms, it is an aggregation of in-person checkups and collected data on the patient that can be sent at any time. This mitigates the worry and uncertainty patients may have of not knowing whether their symptoms are trivial or severe. Most importantly it provides a sense of connection and comfort knowing knowledgable guidance is always by their side.
Finally, for the third purpose, patients receive medicine prescribed by doctors instantly (given the Pillar station has been loaded). This prevents patients' conditions from worsening early-on. The hardware dispenses exactly the prescribed amount while also reciting instructions from the doctor and sends SMS notifications along with it. The Pillar prototype dispenses one type of pill but there is evident potential for more complicated systems.
## How we built it
We built this project using a number of different software and hardware programs that were seamlessly integrated to provide maximum accessibility and feasibility. To begin, the entry point to the Pillar stations is through a complex **Voiceflow** schema connected to **Amazon Echo** that connects to our servers to process what patients describe and need. Voiceflow gives us the ability to easily make API calls and integrate voice, something we believe is more accessible than text or writing for the less-educated populations of developing countries. The audio is summarized by **Meaning Cloud API** and a custom algorithm and is sent to the Doctor's dashboard to evaluate. The dashboard uses **MongoDB Altas** to store patients' information, it allows for high scalability and flexibility for our document oriented model. The front-end of the the dashboard is built using jQuery, HTML5, CSS (Bootstrap) and JavaScript. It provides a visual model for doctors to easily analyze patient data. Doctors can also provide updates and prescriptions for the customer through the dashboard. The Pillar station can dispense prescription pills through the use of **Arduino** (programmed with C). The pill dispense mechanism is triggered through a Voiceflow trigger and a Python script that polls for that trigger. This makes sense for areas with weak wi-fi. Finally, everything is connected through a **Flask** server which creates a host of endpoints and is deployed on **Heroku** for other components to communicate. Another key aspect is that patients can also be reminded of periodic visits to local Pillar stations using **Avaya's SMS & Call Transcription** services. Again, for individuals surviving more than living, often appointments and prescriptions are forgotten.
Through this low-cost and convenient service, we hope to create a world of more accessible healthcare for everyone.
## Challenges and What We Learned
* Hardware issues, we had a lot of difficulties getting the Raspberry Pi to work with the SD card. We are proud that we resolved this hardware issue by switching to Arduino. This was a risk but our problem solving abilities endured.
* The heavy theme of voice throughout our hack was new to most of the team and was a hurdle at first to adapt to non-text data analysis
* For all of us, we also found it to be a huge learning curve to connect both hardware and software for this project. We are proud that we got the project to work after hours on end of Google searches, Stack Overflow Forums and YouTube tutorials.
## What's next for Pillar
* We originally integrated Amazon Web Services (Facial Recognition) login for our project but did not have enough time to polish it. For added security reasons, we would polish and implement this feature in the future. This would also be used to provide annotated and analyzed images for doctors to go with symptom descriptions.
* We also wanted to visualize a lot of the patient's information in their profile dashboard to demonstrate change over time and save that information to the database
* Hardware improvements are boundless and complex pill dispensary systems would be the end goal | ## Inspiration
The inspiration behind Sampson is the abuse and overuse of prescription drugs. This can come in both the form of accidental and on purpose. Our team’s main focus is on assisting patients that are not able responsibly manage their own medication. Often this is a big issue with Alzheimer's or Dementia patients. Sampson’s aim is to prevent this from happening and allow these patients and their families to be less stressed about their medication.
## What it does
Sampson is a cloud-connected, remote managed pill dispensary system intended to assist patients with their prescriptions. The system is controlled by each patient's physician through a centralized database where information such as the type of medicine the user requires, the frequency and schedule of usage of this medicine. Each pill dispenser is equipped with its own pill dispenser mechanism as well as a completely sealed case that does not allow users to directly access their bulk medication. This however is able to be accessed by pharmacists or qualified technicians to refill. Each of these pill holders is connected to an IoT device that is able to communicate with the system’s centralized database. This system is able to get information on pill dosages and scheduling as well as send data about the level that the pill container is at. This same centralized system is able to be accessed by doctors and physicians for them to be able to live update a patient’s prescription from anywhere if necessary.
## How we built it
The team built the system on a variety of frameworks. The centralized database was built with Python, HTML, and CSS using Django Framework. The IoT device was built on an Intel Edison Board using Python. The Prototype Hardware was built on an Arduino 101 using Arduino’s Software and integrated libraries. The team also developed a Simple Socket server from scratch hosted on the Intel Edison Board.
## Challenges we ran into
One of the major challenges the team faced was getting all the systems to communicate together (Physician Database, IoT Device and Prototype Hardware). The biggest challenge of all was having the IoT device be able to communicate with the database through the Simple Socket Server to be able to get information about the user of the device. One of the challenges with the prototype hardware was that we were unable to determine in the timeframe how to also run it through the Intel Edison board and in turn had to control all the hardware through an Arduino. This meant the team had to come up with another way of transmitting important data to the Arduino in order to have a cohesive final product.
## Accomplishments that we're proud of
* Setting up a Simple Socket Servo on the Intel Edison
* Creating a functional prototype out of Arduino and cardboard
* 3D cad model of proposed product
## What we learned
What did we not learn about? The team took on a very ambitious approach to tackle what we felt is a very pertinent and (relatively) simple to fix problem in the medical sector. Throughout this project the team learnt a lot about web services and hosting of servers as well as how IoT devices connect to a centralized system.
## What's next for Sampson
In the future the team hopes to further develop the web platform for Doctors to create a more thought out and user friendly application. There is also a high incentive to create an app or communication system to talk to the user to remind them to take their medication. It is also incredibly important to improve the encryption used to protect patient data. The team would also like to develop a portable version of the system for use while away during the day or on vacation. The team has also proposed the usefulness of such a system in controlling more common household medicines that are still very dangerous to children and adults alike. | ## Inspiration
In school Group project is common. Students intent to divide the tasks and assign it to the team members. Sometimes it's frustrating to always ask students for their progress. That's why I decided to build a web application that allows students/ other collaborators to share their progress with each other and assign a task to one another.
User can use the website to manage their own projects or group projects.
## What it does
1. Allows a team admin to create a task and assign it to members (not fully developed during this hackathon)
2. Individual users can add their own tasks, mark their status, set deadlines, and more
3. A user can be in multiple groups
## How we built it
## Challenges I ran into
Technically speaking, I had challenges querying the database and coding the frontend. Frontend development is not my strong skill, but I managed to write the code anyway. However, the main challenge was working solo: I did not have a team to brainstorm with, which led me to many errors and corrections.
## What's next for worklistCollab
After this hackathon, I'll continue working on this project.
There are many unique features I am thinking to add to this project.
If you have any question, or would like to collaborate contact me at [Ibsa Abraham]([email protected]) [Discord](iblight#1988) | partial |
## Personal Statement
It all started when our team member (definitely not Parth), let's call him Marth, had a crush on a girl who was a big fan of guitar music. He decided to impress her by playing her favorite song on the guitar, but there was one problem - Marth had never played the guitar before.
Determined to win her over, Marth spent weeks practicing the song, but he just couldn't get the hang of it. He even resorted to using YouTube tutorials, but it was no use. He was about to give up when he had a crazy idea - what if he could make the guitar play the song for him?
That's when our team got to work. We spent months developing an attachment that could automatically parse any song from the internet and play it on the guitar. We used innovative arm technology to strum the strings and servos on the headstock to press the chords, ensuring perfect sound every time.
Finally, the day arrived for Marth to show off his new invention to the girl of his dreams. He nervously set up the attachment on his guitar and selected her favorite song. As the guitar began to play, the girl was amazed. She couldn't believe how effortlessly Marth was playing the song. Little did she know, he had a secret weapon!
Marth's invention not only won over the girl, but it also sparked the idea for our revolutionary product. Now, guitar players of all levels can effortlessly play any song they desire. And it all started with a boy, a crush, and a crazy idea.
## Inspiration
Our product, Strum it Up, was inspired by one team member's struggle to impress a girl with his guitar skills. After realizing he couldn't play, he and the team set out to create a solution that would allow anyone to play any song on the guitar with ease.
## What it does
Strum it Up is an attachment for the guitar that automatically parses any song from the internet and uses an innovative arm technology to strum the strings and servos on the headstock to help press the chords, ensuring perfect sound every time.
## How we built it
We spent hours developing Strum it Up using a combination of hardware and software. We used APIs to parse songs from the internet, custom-built arm technology to strum the strings, and servos on the headstock to press the chords.
## Challenges we ran into
One of the biggest challenges we faced was ensuring that the guitar attachment could accurately strum and press the chords on a wide range of guitar models, because different models have different actions (the action is the height between the strings and the fretboard; the greater the height, the harder you need to press the string). We also had to ensure that the sound quality was top-notch and that the attachment was easy to use.
## Accomplishments that we're proud of
We're incredibly proud of the final product - Strum it Up. It's a game-changer for guitar players of all levels and allows anyone to play any song with ease. We're also proud of the innovative technology we developed, which has the potential to revolutionize the music industry.
## What we learned
Throughout the development process, we learned a lot about guitar playing, sound engineering, and hardware development. We also learned the importance of persistence, dedication, and teamwork when it comes to bringing a product to market.
## What's next for Strum it Up
We're excited to see where Strum it Up will take us next. We plan to continue improving the attachment, adding new features, and expanding our reach to guitar players all over the world. We also hope to explore how our technology can be used in other musical applications. | ## Inspiration
In today's age, people have become more and more divided in their opinions. We've found that discussion nowadays can just result in people shouting instead of trying to understand each other.
## What it does
**Change my Mind** helps to alleviate this problem. Our app is designed to help you find people to discuss a variety of different topics. They can range from silly scenarios to more serious situations. (Eg. Is a Hot Dog a sandwich? Is mass surveillance needed?)
Once you've picked a topic and your opinion of it, you'll be matched with a user with the opposing opinion and put into a chat room. You'll have 10 mins to chat with this person and hopefully discover your similarities and differences in perspective.
After the chat is over, we ask you to rate the maturity level of the person you interacted with. This metric allows us to increase the success rate of future discussions, as both matched users will have reputations for maturity.
## How we built it
**Tech Stack**
* Front-end/UI
+ Flutter and dart
+ Adobe XD
* Backend
+ Firebase
- Cloud Firestore
- Cloud Storage
- Firebase Authentication
**Details**
* Front end was built after developing UI mockups/designs
* Heavy use of advanced widgets and animations throughout the app
* Creation of multiple widgets that are reused around the app
* Backend uses Gmail authentication with Firebase.
* Topics for debate are uploaded using Node.js to Cloud Firestore and are displayed in the app using specific Firebase packages.
* Images are stored in Firebase Storage to keep the source files together.
## Challenges we ran into
* Initially connecting Firebase to the front-end
* Managing state while implementing multiple complicated animations
* Designing backend and mapping users with each other and allowing them to chat.
## Accomplishments that we're proud of
* The user interface we made and animations on the screens
* Sign up and login using Firebase Authentication
* Saving user info into Firestore and storing images in Firebase storage
* Creation of beautiful widgets.
## What we learned
* Deeper dive into State Management in flutter
* How to design UI/UX with fonts and colour palettes
* Learned how to use Cloud functions in Google Cloud Platform
* Built on top of our knowledge of Firestore.
## What's next for Change My Mind
* More topics and User settings
* Implementing ML to match users based on maturity and other metrics
* Potential Monetization of the app, premium analysis on user conversations
* Clean up the Coooooode! Better implementation of state management specifically implementation of provide or BLOC. | ## Inspiration
Our inspiration was to look at the world of gaming in a *different* way; little did we know that we would literally be looking at it in a **different** way. We were inspired to build a real-time engine that would allow for the quick development of keyboard games.
## What it does
Allows users to use custom built libraries to quickly set up player objects, instances and keyboard program environments on a local server that can support multiple threads and multiple game instances. We built multiple games using our engine to demonstrate its scalability.
## How we built it
The server was built using Java and the front end was built using Visual C++. The server includes customizable objects and classes that can easily be modified to create a wide array of products. All of the server creation/game lobbies/multi threading/real time is handled by the engine. The front end engine created in C++ creates a 16x6 grid on the Corsair keyboards, allocates values to keys, registers keys and pre defined functions for easy LED manipulation. The front end engine provides the base template for easy manipulation and fast development of programs on the keyboard.
## Challenges we ran into
Our unique selling proposition was the ability to use our engine to create real-time programs/games. Creating this involved many challenges, primarily surrounding latency and packet loss. Our engine was expected to let every keyboard in a lobby reflect the state of every other player and update the LED UI without delay, which required low latency. This became a problem using 1and1 servers located in the UK, with response times between different keyboards increasing to up to a few seconds.
## Accomplishments that we're proud of
* Low latency
* Good synchronization
* Full multiplayer support
* Full game library expansion support
* Creative use of limited hardware
* Multithreading support (Ability to create an infinite number of game rooms)
## What we learned
* Group Version Control
* Planning & organization
* Use of Corsair Gaming SDK
* Always comment, commit and save working versions of code! | winning |
## Inspiration
One of our team members, Andy, ended up pushing back his flu shot as a result of the lengthy wait time and large patient count. Unsurprisingly, he later caught the flu and struggled with his health for just over a week. Although we joke about it now, the reality is many medical processes are still run off outdated technology and can easily be streamlined or made more efficient. This is what we aimed to do with our project.
## What it does
Streamlines the process of filling out influenza vaccine forms for both medical staff, as well as patients.
Makes the entire process more accessible for a plethora of demographics without sacrificing productivity.
## How we built it
Front-End built in HTML/CSS/Vanilla JavaScript (ES6)
Back-End built with Python and a Flask Server.
MongoDB for database.
Microsoft Azure Vision API, Google Cloud Platform NLP for interactivity.
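As a hedged sketch of how the Flask back-end could call Azure's Computer Vision OCR, something like the following; the endpoint version and response parsing follow Azure's public v3.2 OCR API and should be read as assumptions rather than our exact code.

```python
# Hedged sketch of an Azure Computer Vision OCR call from the Flask backend.
# Endpoint, key, and response shape are assumptions based on Azure's public API.
import requests

AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
AZURE_KEY = "<subscription-key>"                                        # placeholder

def ocr_image(image_bytes: bytes) -> list[str]:
    resp = requests.post(
        f"{AZURE_ENDPOINT}/vision/v3.2/ocr",
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    # Flatten the regions -> lines -> words hierarchy into a word list.
    words = []
    for region in resp.json().get("regions", []):
        for line in region.get("lines", []):
            words.extend(w["text"] for w in line.get("words", []))
    return words
```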
## Challenges we ran into
Getting Azure's Vision API to produce quality captures so we could successfully pull the meaningful data we wanted.
Front-End to Back-End communication with GCP NLP functions triggering events.
## Accomplishments that we're proud of
Successfully implementing cloud technologies and tools we had little/no experience utilizing, coming into UofTHacks.
The entire project overall.
## What we learned
How to send webcam image data to Microsoft Azure's Vision API and analyze Optical Character Recognition results.
Quite a bit about NLP tendencies and how to get the most accurate/intended results when utilizing it.
Github Pages cannot deploy Flask servers LOL.
How to deploy with Heroku (as a result of our failure with Github Pages).
## What's next for noFluenza
Payment system for patients depending on insurance coverage
Translation into different languages. | ## Inspiration
When the first experimental COVID-19 vaccine became available in China, hundreds of people started queuing outside hospitals, waiting to get that vaccine. Imagine this on a planetary scale when the whole everybody has to be vaccinated all around the world. There's a big chance while queuing they can spread the virus to people around them or maybe get infected because they cannot perform social distancing at all. We sure don't want that to happen.
The other big issue is that there are lots of conspiracy theories, rumors, stigma, and other forms of disinformation simultaneously spreading across social media about COVID-19 and its vaccine. This misinformation creates frustration for users, with many asking: which one is right? Which one is wrong?
## What it does
Immunize is a mobile app that can save your life and save your time. The goal is to make the distribution of mass-vaccination become more effective, faster, and less crowded. With this app, you can book your vaccine appointment based on your own preference. So the user can easily choose the hospital based on the nearest location and easily schedule an appointment based on their availability.
In addition, based on our research we found that most COVID-19 vaccines require 2 doses given 3 weeks apart to achieve high effectiveness, and there's a big probability that people will forget to return for the follow-up shot. We can minimize that probability: this app automatically schedules the patient for the 2nd vaccination, so there is less likelihood of user error. The reminder system (a notification feature) reminds them on their phone when they have an appointment that day.
## How we built it
We built the prototype using Flutter as our client to support mobile. We integrated Radar.io for hospital search. For facial recognition we used GCP, and for SMS reminders we used Twilio. The mobile client connects to Firebase: Firebase Authentication for auth, Firebase Storage for avatars, and Firestore for user metadata storage. The second backend host used DataStax.
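As an illustrative sketch of the SMS reminder piece (written in Python here; the credentials and phone numbers are placeholders, and the real scheduling logic lives in our backend):

```python
# Illustrative Twilio SMS reminder; account SID, token, and numbers are placeholders.
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxx"  # placeholder
AUTH_TOKEN = "your-auth-token"      # placeholder

def send_reminder(to_number: str, hospital: str, when: str) -> None:
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    client.messages.create(
        body=f"Immunize reminder: your vaccination at {hospital} is scheduled for {when}.",
        from_="+15005550006",  # placeholder Twilio number
        to=to_number,
    )
```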
## Challenges we ran into
Working with an international team was very challenging, with team members 12+ hours apart. All of us were learning something new, whether it was Flutter, facial recognition, or experimenting with new APIs. The Flutter APIs were very experimental; the camera API had to be rolled back two major versions (released within less than 2 months) to find a viable working version compatible with online tutorials.
## Accomplishments that we're proud of
The features:
1. **QR Code Feature** for storing all personal data + health condition, so user don't need to wait for a long queue of administrative things.
2. **Digital Registration Form** checking if user is qualified of COVID-19 vaccine and which vaccine suits best.
3. **Facial Recognition** - due to potential fraud by people who are not eligible for vaccination attempting to get limited supplies of the vaccine, we implemented facial recognition to confirm that the user who booked the appointment is the same one who showed up.
4. **Scheduling Feature** based on date, vaccine availability, and the nearby hospital.
5. **Appointment History** to track all data of patients, this data can be used for better efficiency of mass-vaccination in the future.
6. **Immunize Passport** for vaccine & get access to public spaces. This will create domino effect for people to get vaccine as soon as possible so that they can get access.
7. **Notification** to remind the patients every time they have appointment/ any important news via SMS and push notifications
8. **Vaccine Articles** - to ensure the user can get the accurate information from a verified source.
9. **Emergency Button** - In case there are side effects after vaccination.
10. **Closest Hospitals/Pharmacies** - based on a user's location, users can get details about the closest hospitals through Radar.io Search API.
## What we learned
We researched and learned a lot about the facts of COVID-19 Vaccine; Some coronavirus vaccines may work better in certain populations than others. And there may be one vaccine that seems to work better in the elderly than in younger populations. Alternatively, one may work better in children than it works in the elderly. Research suggests, the coronavirus vaccine will likely require 2 shots to be effective in which taken 21 days apart for Pfizer's vaccine and 28 days apart for Moderna's remedy.
## What's next for Immunize
Final step is to propose this solution to our government. We really hope this app could be implemented in real life and be a solution for people to get COVID-19 vaccine effectively, efficiently, and safely. Polish up our mobile app and build out an informational web app and a mobile app for hospital staff to scan QR codes and verify patient faces (currently they have to use the same app as the client) | ## Inspiration
When visiting a clinic, two big complaints that we have are the long wait times and the necessity of using a kiosk that thousands of other people have already touched. We also know that certain methods of filling in information are not accessible to everyone (for example, someone with Parkinson's disease writing with a pen). In response to these problems, we created Touchless.
## What it does
* Touchless is an accessible and contact-free solution for gathering form information.
* Allows users to interact with forms using voices and touchless gestures.
* Users use different gestures to answer different questions.
* Ex. Raise 1-5 fingers for 1-5 inputs, or thumbs up and down for yes and no.
* Additionally, users are able to use voice for two-way interaction with the form. Either way, surface contact is eliminated.
* Applicable to doctor’s offices and clinics where germs are easily transferable and dangerous when people touch the same electronic devices.
## How we built it
* Gesture and voice components are written in Python.
* The gesture component uses OpenCV and Mediapipe to map out hand joint positions, from which calculations determine hand symbols (see the sketch after this list).
* SpeechRecognition recognizes user speech
* The form outputs audio back to the user by using pyttsx3 for text-to-speech, and beepy for alert noises.
* We use AWS API Gateway to open a connection to a custom Lambda function which has been assigned roles using AWS IAM to restrict access. The Lambda generates a secure key which it sends, along with the form data routed using Flask, to our NoSQL DynamoDB database.
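As referenced above, here is a minimal sketch of the finger-counting idea with Mediapipe; it is a simplified heuristic (thumb handling omitted) rather than our exact calculation.

```python
# Simplified finger-counting heuristic with Mediapipe Hands (thumb omitted).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def count_raised_fingers(frame_bgr) -> int:
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if not results.multi_hand_landmarks:
            return 0
        lm = results.multi_hand_landmarks[0].landmark
        # Fingertip landmarks for index..pinky and their middle (PIP) joints;
        # a finger counts as "up" when its tip sits above its PIP joint.
        tips, pips = [8, 12, 16, 20], [6, 10, 14, 18]
        return sum(1 for t, p in zip(tips, pips) if lm[t].y < lm[p].y)
```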
## Challenges we ran into
* Tried to set up a Cerner API for FHIR data, but had difficulty setting it up.
* As a result, we had to pivot towards using a NoSQL database in AWS as our secure backend database for storing our patient data.
## Accomplishments we’re proud of
This was our whole team’s first time using gesture recognition and voice recognition, so it was an amazing learning experience for us. We’re proud that we managed to implement these features within our project at a level we consider effective.
## What we learned
We learned that FHIR is complicated. We ended up building a custom data workflow that was based on FHIR models we found online, but due to time constraints we did not implement certain headers and keys that make up industrial FHIR data objects.
## What’s next for Touchless
In the future, we would like to integrate the voice and gesture components more seamlessly into one rather than two separate components. | partial |
## Inspiration
Indecision is a problem a lot of people face at meal times, and we wanted to create a way to help people *decide* what meals to make at home while also being able to make use of the food they **already** have lying around. This helps to minimize food waste, while also saving you a trip to the grocery store!
## What it does
How does RecipeVision do this? RecipeVision is an easy-to-use but very powerful mobile application. You take a picture of your fridge, pantry, cabinets, etc., and our app scans it, tells you what it thinks you have, and then gives you recipes using what you have. You have the choice of taking a new picture or using an existing picture. Take a picture, press a button, get recipes. **It's that easy.**
## How we built it
We implemented Microsoft's Cognitive API, specifically Computer Vision, to analyze the pictures taken and give us information that we could then parse and use to find recipes. With the food information given from Microsoft's algorithms, we can then search our database for recipes that can use the ingredients you have.
<https://www.microsoft.com/cognitive-services/en-us/computer-vision-api>
## Challenges we ran into
We were challenged by time constraints, so we were unable to implement a database of recipes to pull from based on the food our app detected. However, we did come up with two plans of action to tackle this. One would be to use Spoonacular's Food API, which can search through their huge database of recipes based on inputted ingredients.
<https://market.mashape.com/spoonacular/recipe-food-nutrition#find-by-ingredients>
The other option would be to create our own database using SQL on Azure or AWS and have more control over input parameters.
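To illustrate the first option, here is a hedged sketch (in Python for brevity, though our app is written in Java) of a call to Spoonacular's findByIngredients endpoint; the parameters follow Spoonacular's public API documentation and the response field is an assumption.

```python
# Hedged sketch of Spoonacular's findByIngredients call; response fields assumed.
import requests

def find_recipes(ingredients: list[str], api_key: str, count: int = 5) -> list[str]:
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={
            "ingredients": ",".join(ingredients),  # e.g. "apples,flour,sugar"
            "number": count,
            "apiKey": api_key,
        },
    )
    resp.raise_for_status()
    # Return just the recipe titles from the matched recipes.
    return [recipe["title"] for recipe in resp.json()]
```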
Additionally, Microsoft's Computer Vision analysis, while covering an expansive range of categories, is not always the best fit for determining the type of food in a picture. It struggles to identify multiple foods in the same picture and is not always completely accurate. However, we were usually able to get a good result for a picture of a single food item.
## Accomplishments that we're proud of
We are very proud that we were able to **successfully feed our smartphone images to Microsoft's Cognitive services and display the resulting analysis to the user.** This service has a lot of powerful potential uses, and we believe our app is very useful and practical.
## What we learned
We learned a ton about Microsoft's Cognitive Services and how to implement them using Java. From this experience, we got a lot of experience working with APIs made by others and reading their code thoroughly. We all expanded our knowledge of Android Studio, and worked on UI as well. It was also amazing to see how many different APIs there were out there available for public use.
## What's next for RecipeVision
We've got a lot in store. In the future, it would be great if we could implement our application onto a stationary Kinect based imaging camera mounted in the fridge and pantry, which would enable real time analysis of food stock. Another cool idea would be integration onto HoloLens, where users would be able to simply look inside their pantry and be given recipes in augmented reality based on what they are seeing. | ## Inspiration
Many of us, including our peers, struggle with deciding what to cook. We usually have a fridge full of items but are not sure what exactly to make with them. This leads us to eating out or buying even more groceries just to follow a recipe.
* We want to be able to use what we have
* Reduce our waste
* Get new and easy ideas
## What it does
The user first takes a picture of the items in their fridge. They can then upload the image to our application. Using computer vision technology, we detect exactly which items are present in the picture (their fridge). After obtaining a list of the ingredients the user has in their fridge, this data is passed along and processed against a database of 1000 quick and easy recipes.
## How we built it
* We designed the mobile and desktop website using Figma
* The website was developed using JavaScript and node.js
* We use Google Cloud Vision API to detect items in the picture
* This list of items in then processed along a database of recipes
* Best matching recipes are returned to the user
## Challenges we ran into
We ran into a lot of difficulties and challenges while building this web app, most of which we were able to overcome with help from each other and learning on the fly.
The first challenge was building and training a machine learning model to apply multi-class object detection to the images the user inputs. This is tricky as there is no proper dataset containing images of vegetables, fruits, meats, condiments, and other items all together. After various experiments with our own machine learning models built from scratch, we attempted to use multiple pre-existing models and tools for our case. We found that the Google Cloud Vision API did the best job out of everything available, so we invested in Google Vision and are currently using their API for our prototype.
The second challenge was getting the correct recipes according to the data received from the artificial intelligence. We are using a database of 1000 recipes and set a threshold for the minimum number of items that need to match (ingredients the user has versus ingredients the recipe requires). Our assumption is that the user already has basic ingredients such as salt, pepper, butter, oil, etc.
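A minimal sketch of that matching step is below; the recipe structure, staple list, and threshold value are illustrative, not our production data model.

```python
# Illustrative threshold matching of detected ingredients against recipes.
PANTRY_STAPLES = {"salt", "pepper", "butter", "oil", "water"}

def matching_recipes(detected: set[str], recipes: list[dict], min_matches: int = 3):
    available = {item.lower() for item in detected} | PANTRY_STAPLES
    results = []
    for recipe in recipes:
        needed = {ing.lower() for ing in recipe["ingredients"]}
        overlap = len(needed & available)
        if overlap >= min_matches:
            results.append((recipe["name"], overlap))
    # Best-covered recipes first.
    return sorted(results, key=lambda pair: pair[1], reverse=True)
```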
## Accomplishments that we're proud of
* Coming up with an idea that solves a problem every member of our team and many peers we interviewed face
* Using modern artificial intelligence to solve a major part of our problem (detecting ingredients/groceries) from a given image
* Designing a very good-looking and user-friendly UI with an excellent user experience (quick and easy)
## What we learned
Each team member learned a new skill or enhanced a current one during this hackathon, which is what we were here for. We learned to use newer tools, such as Google Cloud, Figma, and others, to streamline our product development.
## What's next for Xcellent Recipes
**We truly believe in our product and its usefulness for customers. We will continue working on Xcellent Recipes with a product launch in the future. The next steps include:**
1. Establishing a backend server
2. Create or obtain our own data for training a ML model for our use case
3. Fine tune recipes
4. Company Launch | ## Inspiration
Unhealthy diet is the leading cause of death in the U.S., contributing to approximately 678,000 deaths each year, due to nutrition and obesity-related diseases, such as heart disease, cancer, and type 2 diabetes. Let that sink in; the leading cause of death in the U.S. could be completely nullified if only more people cared to monitor their daily nutrition and made better decisions as a result. But **who** has the time to meticulously track every thing they eat down to the individual almond, figure out how much sugar, dietary fiber, and cholesterol is really in their meals, and of course, keep track of their macros! In addition, how would somebody with accessibility problems, say blindness for example, even go about using an existing app to track their intake? Wouldn't it be amazing to be able to get the full nutritional breakdown of a meal consisting of a cup of grapes, 12 almonds, 5 peanuts, 46 grams of white rice, 250 mL of milk, a glass of red wine, and a big mac, all in a matter of **seconds**, and furthermore, if that really is your lunch for the day, be able to log it and view rich visualizations of what you're eating compared to your custom nutrition goals?? We set out to find the answer by developing macroS.
## What it does
macroS integrates seamlessly with the Google Assistant on your smartphone and lets you query for a full nutritional breakdown of any combination of foods that you can think of. Making a query is **so easy**, you can literally do it while *closing your eyes*. Users can also make a macroS account to log the meals they're eating every day, conveniently and without hassle, with the powerful built-in natural language processing model. They can view their account in a browser to set nutrition goals and view rich visualizations of their nutrition habits to help them outline the steps they need to take to improve.
## How we built it
DialogFlow and the Google Action Console were used to build a realistic voice assistant that responds to user queries for nutritional data and food logging. We trained a natural language processing model to identify the difference between a call to log a food-eaten entry and simply a request for a nutritional breakdown. We deployed our functions, written in Node.js, to the Firebase cloud, from where they process user input to the Google Assistant when the test app is started. When a request for nutritional information is made, the cloud function makes an external API call to Nutritionix, which provides NLP for querying a database of over 900k grocery and restaurant foods. A Mongo database is to be used to store user accounts and pass data from the cloud function API calls to the frontend of the web application, developed using HTML/CSS/JavaScript.
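A hedged sketch of that Nutritionix call follows (in Python here, although our cloud functions are Node.js); the endpoint, headers, and field names follow Nutritionix's public v2 API and should be treated as assumptions rather than our exact code.

```python
# Hedged sketch of the Nutritionix natural-language nutrients lookup.
import requests

def nutrition_breakdown(query: str, app_id: str, app_key: str) -> list[dict]:
    resp = requests.post(
        "https://trackapi.nutritionix.com/v2/natural/nutrients",
        headers={"x-app-id": app_id, "x-app-key": app_key},
        json={"query": query},  # e.g. "a cup of grapes, 12 almonds and a big mac"
    )
    resp.raise_for_status()
    # Field names assumed from Nutritionix's public docs.
    return [
        {"food": f["food_name"], "calories": f["nf_calories"]}
        for f in resp.json()["foods"]
    ]
```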
## Challenges we ran into
Learning how to use the different APIs and the Google Action Console to create intents, contexts, and fulfillment was challenging on its own, but the challenges were amplified when we introduced the ambitious goal of training the voice agent to differentiate between a request to log a meal and a simple request for nutritional information. In addition, the data we actually needed for the Nutritionix queries was often nested deep within the various JSON objects being thrown all over the place between the voice assistant and the cloud functions; the team was finally able to find what it was looking for after spending a lot of time in the Firebase logs. On top of that, the entire team lacked any experience using natural language processing and voice-enabled technologies, and 3 out of the 4 members had never even used an API before, so there was certainly a steep learning curve in getting comfortable with it all.
## Accomplishments that we're proud of
We are proud to tackle such a prominent issue with a very practical and convenient solution that really nobody would have any excuse not to use; by making something so important, self-monitoring of your health and nutrition, much more convenient and even more accessible, we're confident that we can help large amounts of people finally start making sense of what they're consuming on a daily basis. We're literally able to get full nutritional breakdowns of combinations of foods in a matter of **seconds**, that would otherwise take upwards of 30 minutes of tedious google searching and calculating. In addition, we're confident that this has never been done before to this extent with voice enabled technology. Finally, we're incredibly proud of ourselves for learning so much and for actually delivering on a product in the short amount of time that we had with the levels of experience we came into this hackathon with.
## What we learned
We made and deployed the cloud functions that integrated with our Google Action Console and trained the nlp model to differentiate between a food log and nutritional data request. In addition, we learned how to use DialogFlow to develop really nice conversations and gained a much greater appreciation to the power of voice enabled technologies. Team members who were interested in honing their front end skills also got the opportunity to do that by working on the actual web application. This was also most team members first hackathon ever, and nobody had ever used any of the APIs or tools that we used in this project but we were able to figure out how everything works by staying focused and dedicated to our work, which makes us really proud. We're all coming out of this hackathon with a lot more confidence in our own abilities.
## What's next for macroS
We want to finish building out the user database and integrating the voice application with the actual frontend. The technology is really scalable and once a database is complete, it can be made so valuable to really anybody who would like to monitor their health and nutrition more closely. Being able to, as a user, identify my own age, gender, weight, height, and possible dietary diseases could help us as macroS give users suggestions on what their goals should be, and in addition, we could build custom queries for certain profiles of individuals; for example, if a diabetic person asks macroS if they can eat a chocolate bar for lunch, macroS would tell them no because they should be monitoring their sugar levels more closely. There's really no end to where we can go with this! | losing |
## Inspiration
The day-to-day process of navigating to websites and then adding products to the cart is time-consuming, so we thought of making a voice-enabled shopping assistant which can search for and add products to the cart with just a voice command.
## What it does
It can search for and add products to the cart with just a voice command. It is also capable of performing day-to-day activities such as playing music, opening applications, etc.
## How I built it
We built it in Python using the Selenium library to perform web automation.
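A simplified sketch of the voice-to-cart flow is below; the store URL and element locators are placeholders, since every site's markup differs.

```python
# Simplified voice-to-cart flow: listen for a command, then drive the browser.
import speech_recognition as sr
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

def listen_for_command() -> str:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # e.g. "add headphones to cart"

def add_to_cart(product: str) -> None:
    driver = webdriver.Chrome()
    driver.get("https://example-store.com")                       # placeholder store
    search_box = driver.find_element(By.NAME, "q")                 # placeholder locator
    search_box.send_keys(product, Keys.ENTER)
    driver.find_element(By.CSS_SELECTOR, ".add-to-cart").click()   # placeholder locator
```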
## Challenges I ran into
Web automation was really tough
## Accomplishments that I'm proud of
The final product which performs on voice command is an accomplishment.
## What I learned
I have learned many new skills like web automation, Python libraries, etc.
## What's next for Voice Shop - Voice-enabled Shopping Assistant | ## Inspiration
Let’s face it: getting your groceries is hard. As students, we’re constantly looking for food that is healthy, convenient, and cheap. With so many choices for where to shop and what to get, it’s hard to find the best way to get your groceries. Our product makes it easy. It helps you plan your grocery trip, helping you save time and money.
## What it does
Our product takes your list of grocery items and searches an automatically generated database of deals and prices at numerous stores. We gather this data by scraping prices both from grocery store websites directly and from couponing websites.
We show you the best way to purchase items from stores nearby your postal code, choosing the best deals per item, and algorithmically determining a fast way to make your grocery run to these stores. We help you shorten your grocery trip by allowing you to filter which stores you want to visit, and suggesting ways to balance trip time with savings. This helps you reach a balance that is fast and affordable.
For your convenience, we offer an alternative option where you could get your grocery orders delivered from several different stores by ordering online.
Finally, as a bonus, we offer AI generated suggestions for recipes you can cook, because you might not know exactly what you want right away. Also, as students, it is incredibly helpful to have a thorough recipe ready to go right away.
## How we built it
On the frontend, we used **JavaScript** with **React, Vite, and TailwindCSS**. On the backend, we made a server using **Python and FastAPI**.
In order to collect grocery information quickly and accurately, we used **Cloudscraper** (Python) and **Puppeteer** (Node.js). We processed data using handcrafted text searching. To find the items that most relate to what the user desires, we experimented with **Cohere's semantic search**, but found that an implementation of the **Levenshtein distance string algorithm** works best for this case, largely since the user only provides one to two-word grocery item entries.
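A minimal sketch of the Levenshtein matching we settled on; the nearest-match form shown here is illustrative.

```python
# Classic dynamic-programming Levenshtein distance plus a nearest-match helper.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def best_match(query: str, product_names: list[str]) -> str:
    # Pick the product name closest (in edit distance) to the user's entry.
    return min(product_names, key=lambda name: levenshtein(query.lower(), name.lower()))
```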
To determine the best travel paths, we combined the **Google Maps API** with our own path-finding code. We determine the path using a **greedy algorithm**. This algorithm, though heuristic in nature, still gives us a reasonably accurate result without exhausting resources and time on simulating many different possibilities.
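And a sketch of the greedy ordering step, assuming travel times between stops have already been fetched (for example from the Google Maps Distance Matrix API) into a nested dict of minutes:

```python
# Greedy nearest-neighbour ordering over a precomputed travel-time matrix.
def greedy_route(start: str, stores: set[str],
                 minutes: dict[str, dict[str, float]]) -> list[str]:
    route, current, remaining = [], start, set(stores)
    while remaining:
        nearest = min(remaining, key=lambda s: minutes[current][s])
        route.append(nearest)
        remaining.remove(nearest)
        current = nearest
    return route
```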
To process user payments, we used the **Paybilt API** to accept Interac E-transfers. Sometimes, it is more convenient for us to just have the items delivered than to go out and buy it ourselves.
To provide automatically generated recipes, we used **OpenAI’s GPT API**.
## Challenges we ran into
Everything.
Firstly, as Waterloo students, we are facing midterms next week. Throughout this weekend, it has been essential to balance working on our project with our mental health, rest, and last-minute study.
Collaborating in a team of four was a challenge. We had to decide on a project idea, scope, and expectations, and get working on it immediately. Maximizing our productivity was difficult when some tasks depended on others. We also faced a number of challenges with merging our Git commits; we tended to overwrite one another's code, and bugs resulted. We all had to learn new technologies, techniques, and ideas to make it all happen.
Of course, we also faced a fair number of technical roadblocks working with code and APIs. However, with reading documentation, speaking with sponsors/mentors, and admittedly a few workarounds, we solved them.
## Accomplishments that we’re proud of
We felt that we put forth a very effective prototype given the time and resource constraints. This is an app that we ourselves can start using right away for our own purposes.
## What we learned
Perhaps most of our learning came from the process of going from an idea to a fully working prototype. We learned to work efficiently even when we didn’t know what we were doing, or when we were tired at 2 am. We had to develop a team dynamic in less than two days, understanding how best to communicate and work together quickly, resolving our literal and metaphorical merge conflicts. We persisted towards our goal, and we were successful.
Additionally, we were able to learn about technologies in software development. We incorporated location and map data, web scraping, payments, and large language models into our product.
## What’s next for our project
We’re very proud that, although still rough, our product is functional. We don’t have any specific plans, but we’re considering further work on it. Obviously, we will use it to save time in our own daily lives. | ## Inspiration
We've worked on e-commerce stores (Shopify/etc), and managing customer support calls was tedious and expensive (small businesses online typically have no number for contact), despite that ~60% of customers prefer calls for support questions and personalization. We wanted to automate the workflow to drive more sales and save working hours.
Existing solutions require custom workflow setup for chatbots; people still have to answer 20 percent of questions, and a lot of them are confirmation questions (IBM). People also get question fatigue from working through bots just to reach an actual human.
## What it does
It's an embeddable JavaScript widget/phone number for any e-commerce store or online product catalog that lets customers call, text, or message via on-site chat about products personalized to them, processing returns, and general support. We plan to expand out of e-commerce after signing on 100 true users who love us. Boost sales while you are asleep instead of directing customers to a support ticket line.
We plan to pursue routes of revenue with:
* % of revenue from boosted products
* Monthly subscription
* Costs savings from reduced call center capacity requirements
## How we built it
We used a HTML/CSS frontend connected to a backend of Twilio (phone call, transcription, and text-to-speech) and OpenAI APIs (LLMs, Vector DBQA customization).
## Challenges we ran into
* Twilio's Python functionality we relied on was deprecated, which we did not initially realize; we eventually discovered this while browsing documentation and switched to JS
* Accidentally dumped our TreeHacks shirt into a pot of curry
## Accomplishments that we're proud of
* Developed real-time transcription connected to a phone call, which we then streamed to a custom-trained model -- while maintaining conversational-level latency
* Somehow figured out a way to sleep
* Became addicted to Pocari Sweat
## What we learned
We realized the difficulty of navigating documentation while traversing several different APIs. For example, real-time transcription was a huge challenge.
Moreover, we learned about embedding functions that allowed us to customize the LLM for our use case. This enabled us to provide a performance improvement to the existing model while also not adding much compute cost. During our time at TreeHacks, we became close with the Modal team as they were incredibly supportive of our efforts. We also greatly enjoyed leveraging OpenAI to provide this critical website support.
## What's next for Ellum
We are releasing the service to close friends who have experienced these problems, particularly e-commerce distributors and beta-test the service with them. We know some Shopify owners who would be down to demo the service, and we hope to work closely with them to grow their businesses.
We would love to pursue our pain points even more for instantly providing support and setting it up. Valuable features, such as real-time chat, that can help connect us to more customers can be added in the future. We would also love to test out the service with brick-and-mortar stores, like Home Depot, Lowes, CVS, which also have a high need for customer support.
Slides: <https://drive.google.com/file/d/1fLFWAgsi1PXRVi5upMt-ZFivomOBo37k/view?usp=sharing>
Video Part 1: <https://youtu.be/QH33acDpBj8>
Video Part 2: <https://youtu.be/gOafS4ZoDRQ> | partial |
## Inspiration
Enabling Accessible Transportation for Those with Disabilities
AccessRide is a cutting-edge website created to transform the transportation experience for those with impairments. We want to offer a welcoming, trustworthy, and accommodating ride-hailing service that is suited to the particular requirements of people with mobility disabilities since we are aware of the special obstacles they encounter.
## What it does
Our goal is to close the accessibility gap in the transportation industry and guarantee that everyone has access to safe and practical travel alternatives. We link passengers with disabilities to skilled, sympathetic drivers who have been educated to offer specialised assistance and fulfill their particular needs using the AccessRide app.
Accessibility:-
The app focuses on ensuring accessibility for passengers with disabilities by offering vehicles equipped with wheelchair ramps or lifts, spacious interiors, and other necessary accessibility features.
Specialized Drivers:-
The app recruits drivers who are trained to provide assistance and support to passengers with disabilities. These drivers are knowledgeable about accessibility requirements and are committed to delivering a comfortable experience.
Customized Preferences:-
Passengers can specify their particular needs and preferences within the app, such as requiring a wheelchair-accessible vehicle, additional time for boarding and alighting, or any specific assistance required during the ride.
Real-time Tracking:-
Passengers can track the location of their assigned vehicle in real-time, providing peace of mind and ensuring they are prepared for pick-up.
Safety Measures:-
The app prioritizes passenger safety by conducting driver background checks, ensuring proper vehicle maintenance, and implementing safety protocols to enhance the overall travel experience.
Seamless Payment:-
The app offers convenient and secure payment options, allowing passengers to complete their transactions electronically, reducing the need for physical cash handling
## How we built it
We built it using Django, PostgreSQL, and a Jupyter Notebook for driver selection.
## Challenges we ran into
Ultimately, the business impact of AccessRide stems from its ability to provide a valuable and inclusive service to people with disabilities. By prioritizing their needs and ensuring a comfortable and reliable transportation experience, the app can drive customer loyalty, attract new users, and make a positive social impact while growing as a successful business.
To maintain quality service, AccessRide includes a feedback and rating system. This allows passengers to provide feedback on their experience and rate drivers based on their level of assistance, vehicle accessibility, and overall service quality. This was a challenging part of the event.
## Accomplishments that we're proud of
We are proud that we completed our project. We look forward to developing more projects.
## What we learned
We learned about the concepts of Django and PostgreSQL. We also learned many machine learning algorithms and implemented them as well.
## What's next for Accessride-Comfortable ride for all abilities
In conclusion, AccessRide is an innovative and groundbreaking project that aims to transform the transportation experience for people with disabilities. By focusing on accessibility, specialized driver training, and a machine learning algorithm, the app sets itself apart from traditional ride-hailing services. It creates a unique platform that addresses the specific needs of passengers with disabilities and ensures a comfortable, reliable, and inclusive transportation experience.
## Your Comfort, Our Priority "Ride with Ease, Ride with Comfort“ | ## Inspiration
One of our teammate’s grandfathers suffers from diabetic retinopathy, which causes severe vision loss.
Looking on a broader scale, over 2.2 billion people suffer from near or distant vision impairment worldwide. After examining the issue closer, it can be confirmed that the issue disproportionately affects people over the age of 50 years old. We wanted to create a solution that would help them navigate the complex world independently.
## What it does
### Object Identification:
Utilizes advanced computer vision to identify and describe objects in the user's surroundings, providing real-time audio feedback.
### Facial Recognition:
It employs machine learning for facial recognition, enabling users to recognize and remember familiar faces, and fostering a deeper connection with their environment.
### Interactive Question Answering:
Acts as an on-demand information resource, allowing users to ask questions and receive accurate answers, covering a wide range of topics.
### Voice Commands:
Features a user-friendly voice command system accessible to all, facilitating seamless interaction with the AI assistant: Sierra.
## How we built it
* Python
* OpenCV (see the face-detection sketch after this list)
* GCP & Firebase
* Google Maps API, pyttsx3 (text-to-speech), Google's Vertex AI Toolkit (removed later due to inefficiency)
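As referenced in the OpenCV item above, here is a simplified sketch of the face-detection step; the full assistant layers recognition of *which* face it is on top of this.

```python
# Simplified OpenCV face detection with the bundled Haar cascade model.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Returns a list of (x, y, w, h) bounding boxes, one per detected face.
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```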
## Challenges we ran into
* Slow response times with Google Products, resulting in some replacements of services (e.g. Pyttsx3 was replaced by a faster, offline nlp model from Vosk)
* Due to the hardware capabilities of our low-end laptops, there is some amount of lag and slowness in the software with average response times of 7-8 seconds.
* Due to strict security measures and product design, we faced a lack of flexibility in working with the Maps API. After working together and viewing some tutorials, we learned how to integrate Google Maps into the dashboard.
## Accomplishments that we're proud of
We are proud that by the end of the hacking period, we had a working prototype and software. Both of these factors were able to integrate properly. The AI assistant, Sierra, can accurately recognize faces as well as detect settings in the real world. Although there were challenges along the way, the immense effort we put in paid off.
## What we learned
* How to work with a variety of Google Cloud-based tools and how to overcome potential challenges they pose to beginner users.
* How to connect a smartphone to a laptop with a remote connection to create more opportunities for practical designs and demonstrations.
* How to create Docker containers to deploy Google Cloud-based Flask applications to host our dashboard.
* How to develop Firebase Cloud Functions to implement cron jobs. We tried to develop a cron job that would send alerts to the user.
## What's next for Saight
### Optimizing the Response Time
Currently, the hardware limitations of our computers create a large delay in the assistant's response times. By improving the efficiency of the models used, we can improve the user experience in fast-paced environments.
### Testing Various Materials for the Mount
The physical prototype of the mount was mainly a proof-of-concept for the idea. In the future, we can conduct research and testing on various materials to find out which ones are most preferred by users. Factors such as density, cost and durability will all play a role in this decision. | ## Inspiration
We were inspired to create such a project since we are all big fans of 2D content, yet have no way of actually animating 2D movies. Hence, the idea for StoryMation was born!
## What it does
Given a text prompt, our platform converts it into a fully-featured 2D animation, complete with music, lots of action, and amazing-looking sprites! And the best part? This isn't achieved by calling some image generation API to generate a video for our movie; instead, we call on such APIs to create lots of 2D sprites per scene, and then leverage the power of LLMs (Cohere) to move those sprites around in a fluid and dynamic manner!
## How we built it
On the frontend we used React and Tailwind, whereas on the backend we used Node JS and Express. However, for the actual movie generation, we used a massive, complex pipeline of AI-APIs. We first use Cohere to split the provided story plot into a set of scenes. We then use another Cohere API call to generate a list of characters, and a lot of their attributes, such as their type, description (for image gen), and most importantly, Actions. Each "Action" consists of a transformation (translation/rotation) in some way, and by interpolating between different "Actions" for each character, we can integrate them seamlessly into a 2D animation.
This framework for moving, rotating and scaling ALL sprites using LLMs like Cohere is what makes this project truly stand out. Had we used an Image Generation API like SDXL to simply generate a set of frames for our "video", we would have ended up with a janky stop-motion video. However, we used Cohere in a creative way, to decide where and when each character should move, scale, rotate, etc. thus ending up with a very smooth and human-like final 2D animation.
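To make the interpolation idea concrete, here is a language-agnostic illustration (written in Python, although our engine runs on Node.js); the Action fields shown are assumptions about the schema, used only for demonstration.

```python
# Linear interpolation between two keyframe "Actions" for a sprite.
def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def interpolate_action(start: dict, end: dict, t: float) -> dict:
    """t runs from 0.0 (start keyframe) to 1.0 (end keyframe)."""
    return {
        "x": lerp(start["x"], end["x"], t),
        "y": lerp(start["y"], end["y"], t),
        "rotation": lerp(start["rotation"], end["rotation"], t),
        "scale": lerp(start["scale"], end["scale"], t),
    }

# Example: a sprite sliding right while rotating over 60 frames.
start = {"x": 0, "y": 100, "rotation": 0, "scale": 1.0}
end = {"x": 300, "y": 100, "rotation": 45, "scale": 1.0}
frames = [interpolate_action(start, end, i / 59) for i in range(60)]
```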
## Challenges we ran into
Since our project is very heavily reliant on BETA parts of Cohere for many parts of its pipeline, getting Cohere to fit everything into the strict JSON prompts we had provided, despite the fine-tuning, was often quite difficult.
## Accomplishments that we're proud of
In the end, we were able to accomplish what we wanted! | winning |
## About
Learning a foreign language can pose challenges, particularly without opportunities for conversational practice. Enter SpyLingo! Enhance your language proficiency by engaging in missions designed to extract specific information from targets. You select a conversation topic, and the spy agency devises a set of objectives for you to query the target about, thereby completing the mission. Users can choose their native language and the language they aim to learn. The website and all interfaces seamlessly translate into their native tongue, while missions are presented in the foreign language.
## Features
* Choose a conversation topic provided by the spy agency and it will generate a designated target and a set of objectives to discuss.
* Engage the target in dialogue in the foreign language on any subject! As you achieve objectives, they'll be automatically marked off your mission list.
* Witness dynamically generated images of the target, reflecting the topics they discuss, after each response.
* Enhance listening skills with automatically generated audio of the target's response.
* Translate the entire message into your native language for comprehension checks.
* Instantly translate any selected word within the conversation context, providing additional examples of its usage in the foreign language, which can be bookmarked for future review.
* Access hints for formulating questions about the objectives list to guide interactions with the target.
* Your messages are automatically checked for grammar and spelling, with explanations in your native language for correcting foreign language errors.
## How we built it
With the time constraint of the hackathon, this project was built entirely on the frontend of a web application. The TogetherAI API was used for all text and image generation and the ElevenLabs API was used for audio generation. The OpenAI API was used for detecting spelling and grammar mistakes.
## Challenges we ran into
The largest challenge of this project was building something that can work seamlessly in **812 different native-foreign language combinations.** There was a lot of time spent on polishing the user experience to work with different sized text, word parsing, different punctuation characters, etc.
Even more challenging was the prompt engineering required to ensure the AI would speak in the language it is supposed to. The chat models frequently would revert to English if the prompt was in English, even if the prompt specified the model should respond in a different language. As a result, there are **over 800** prompts used, as each one has to be translated into every language supported during build time.
There were also a lot of challenges in reducing the latency of the API responses to make for a pleasant user experience. After many rounds of performance optimizations, the app now effectively generates the text, audio, and images in perceived real time.
## Accomplishments that I'm proud of
The biggest challenges also yielded the biggest accomplishments in my eyes. Building a chatbot that can be interacted with in any language and operates in real time by myself in the time limit was certainly no small task.
I'm also exceptionally proud of the fact that I honestly think it's fun to play. I've had many projects that get dumped on a dusty shelf once completed, but the fact that I fully intend to keep using this after the hackathon to improve my language skills makes me very happy.
## What we learned
I had never used these APIs before beginning this hackathon, so there was quite a bit of documentation that I had to read to understand for how to correctly stream the text & audio generation.
## What's next for SpyLingo
There are still more features that I'd like to add, like different types of missions for the user. I also think the image prompting can use some more work since I'm not as familiar with image generation.
I would like to productionize this project and setup a proper backend & database for it. Maybe I'll set up a stripe integration and make it available for the public too! | ## Inspiration
We all love learning languages, but one of the most frustrating things is seeing an object that you don't know the word for and then trying to figure out how to describe it in your target language. Being students of Japanese, it is especially frustrating to find the exact characters to describe an object that you see. With this app, we want to change all that. Huge advances have been made in computer vision in recent years that have allowed us to accurately detect all kinds of different image matter. Combined with advanced translation software, we found the perfect recipe to make an app that could capitalize upon these technologies and help foreign languages students all around the world.
## What it does
The app allows you to either take a picture of an object or scene with your iPhone camera or upload an image from your photo library. You then select a language that you would like to translate words into. The app remotely contacts Microsoft Azure Cognitive Services with an HTTP request from within the app to create tags from the image you uploaded. These tags are then sent to Google Cloud Platform services to translate them into your target language. After doing this, a list of English-foreign language word pairs relating to the image tags is displayed.
## How we built it
The app was built using Xcode and was coded in Swift. We split up to work on different parts of the project. Kent worked on interfacing with Microsoft's computer vision AI and created the basic app structure. Isaiah worked on setting up Google Cloud Platform translation and contributed to adding functionality for multiple languages. Ivan worked on designing the logo for the app and most of the visuals.
## Challenges we ran into
A lot of time was spent figuring out how to deal with HTTP requests and JSON, two things none of us had much experience with, and then using them in Swift to contact remote services through our app. After this major hurdle was overcome, there was a concurrency issue: both the vision AI and translation requests were designed to run in parallel to the main thread of the app's execution, which created some problems for updating the app's UI. We ended up fixing all the issues though!
## Accomplishments that we're proud of
We are very proud that we managed to utilize some really awesome cloud services like Microsoft Azure's Cognitive Services and Google Cloud Platform, and are happy that we managed to create an app that worked at the end of the day!
## What we learned
This was a great learning experience for all of us, both in terms of the technical skills we acquired in connecting to cloud services and in terms of the teamwork skills we acquired.
## What's next for Literal
Firstly, we would add more languages to translate and make much cleaner UI. Then we would enable it to run on cloud services indefinitely instead of just on a temporary treehacks-based license. After that, there are many more cool ideas that we could implement into the app! | >
> Domain.com domain: IDE-asy.com
>
>
>
## Inspiration
Software engineering and development have always been subject to change over the years. With new tools, frameworks, and languages being announced every year, it can be challenging for new developers or students to keep up with the new trends the technological industry has to offer. Creativity and project inspiration should not be limited by syntactic and programming knowledge. Quick Code allows ideas to come to life no matter the developer's experience, breaking the coding barrier to entry allowing everyone equal access to express their ideas in code.
## What it does
Quick Code allows users to code simply with high-level voice commands. The user can speak in pseudocode, and our platform interprets the audio command and generates the corresponding JavaScript code snippet in the web-based IDE.
## How we built it
We used React for the frontend, the recorder.js API for capturing the user's voice input, and RunKit for the in-browser IDE. The backend is written in Python and uses Microsoft Azure's Cognitive Speech Services to process the user's audio and translate it into syntactically correct code for the frontend's IDE.
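A rough sketch of what that backend endpoint looks like (the route name, file field, and region below are illustrative assumptions, not the exact production code):

```python
# Illustrative Flask endpoint: accept a recorded utterance and transcribe it with Azure.
import azure.cognitiveservices.speech as speechsdk
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/transcribe", methods=["POST"])
def transcribe():
    # recorder.js posts a WAV blob; save it so the SDK can read it from disk
    request.files["audio"].save("utterance.wav")
    config = speechsdk.SpeechConfig(subscription="<azure-key>", region="<region>")
    audio = speechsdk.audio.AudioConfig(filename="utterance.wav")
    recognizer = speechsdk.SpeechRecognizer(speech_config=config, audio_config=audio)
    result = recognizer.recognize_once()  # blocking, single utterance
    pseudo_code = result.text             # e.g. "declare a variable x equal to five"
    return jsonify({"text": pseudo_code})
```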
## Challenges we ran into
>
> "Before this hackathon I would usually deal with the back-end, however, for this project I challenged myself to experience a different role. I worked on the front end using react, as I do not have much experience with either react or Javascript, and so I put myself through the learning curve. It didn't help that this hacakthon was only 24 hours, however, I did it. I did my part on the front-end and I now have another language to add on my resume.
> The main Challenge that I dealt with was the fact that many of the Voice reg" *-Iyad*
>
>
> "Working with blobs, and voice data in JavaScript was entirely new to me." *-Isaac*
>
>
> "Initial integration of the Speech to Text model was a challenge at first, and further recognition of user audio was an obstacle. However with the aid of recorder.js and Python Flask, we able to properly implement the Azure model." *-Amir*
>
>
> "I have never worked with Microsoft Azure before this hackathon, but decided to embrace challenge and change for this project. Utilizing python to hit API endpoints was unfamiliar to me at first, however with extended effort and exploration my team and I were able to implement the model into our hack. Now with a better understanding of Microsoft Azure, I feel much more confident working with these services and will continue to pursue further education beyond this project." *-Kris*
>
>
>
## Accomplishments that we're proud of
>
> "We had a few problems working with recorder.js as it used many outdated modules, as a result we had to ask many mentors to help us get the code running. Though they could not figure it out, after hours of research and trying, I was able to successfully implement recorder.js and have the output exactly as we needed. I am very proud of the fact that I was able to finish it and not have to compromise any data." *-Iyad*
>
>
> "Being able to use Node and recorder.js to send user audio files to our back-end and getting the formatted code from Microsoft Azure's speech recognition model was the biggest feat we accomplished." *-Isaac*
>
>
> "Generating and integrating the Microsoft Azure Speech to Text model in our back-end was a great accomplishment for our project. It allowed us to parse user's pseudo code into properly formatted code to provide to our website's IDE." *-Amir*
>
>
> "Being able to properly integrate and interact with the Microsoft Azure's Speech to Text model was a great accomplishment!" *-Kris*
>
>
>
## What we learned
>
> "I learned how to connect the backend to a react app, and how to work with the Voice recognition and recording modules in react. I also worked a bit with Python when trying to debug some problems in sending the voice recordings to Azure’s servers." *-Iyad*
>
>
> "I was introduced to Python and learned how to properly interact with Microsoft's cognitive service models." *-Isaac*
>
>
> "This hackathon introduced me to Microsoft Azure's Speech to Text model and Azure web app. It was a unique experience integrating a flask app with Azure cognitive services. The challenging part was to make the Speaker Recognition to work; which unfortunately, seems to be in preview/beta mode and not functioning properly. However, I'm quite happy with how the integration worked with the Speach2Text cognitive models and I ended up creating a neat api for our app." *-Amir*
>
>
> "The biggest thing I learned was how to generate, call and integrate with Microsoft azure's cognitive services. Although it was a challenge at first, learning how to integrate Microsoft's models into our hack was an amazing learning experience. " *-Kris*
>
>
>
## What's next for QuickCode
We plan on continuing development and making this product available on the market. We first hope to include more functionality within Javascript, then extending to support other languages. From here, we want to integrate a group development environment, where users can work on files and projects together (version control). During the hackathon we also planned to have voice recognition to recognize and highlight which user is inputting (speaking) which code. | losing |
# QThrive
Web-based chatbot to facilitate journalling and self-care. Built for QHacks 2017 @ Queen's University. | ## Inspiration
Toronto is infamous for being tied for the second-longest average commute time of any city (96 minutes round trip). People love to complain about the TTC, and many people have legitimate reasons for avoiding public transit. With our app, we hope to change this. Our aim is to change the public's perspective of transit in Toronto by creating a more engaging and connected experience.
## What it does
We built an iOS app that transforms the subway experience. We display important information to subway riders, such as ETA, current/next station, as well as information about events and points of interest in Toronto. In addition, we allow people to connect by participating in a local chat and multiplayer games.
We have small web servers running on ESP8266 micro-controllers that will be implemented in TTC subway cars. These micro-controllers create a LAN (Local Area Network) Intranet and allow commuters to connect with each other on the local network using our app. The ESP8266 micro-controllers also connect to the internet when available and can send data to Microsoft Azure.
## How we built it
The front end of our app is built using Swift for iOS devices, however, all devices can connect to the network and an Android app is planned for the future. The live chat section was built with JavaScript. The back end is built using C++ on the ESP8266 micro-controller, while a Python script handles the interactions with Azure. The ESP8266 micro-controller runs in both access point (AP) and station (STA) modes, and is fitted with a button that can push data to Azure.
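A simplified sketch of the Python relay script's job (the Azure ingestion endpoint and payload shape here are assumptions; the actual integration details may differ):

```python
# Rough sketch: poll the ESP8266 on the local network and forward events to Azure.
import time
import requests

ESP_STATUS_URL = "http://192.168.4.1/status"          # ESP8266 in AP mode serves the LAN
AZURE_INGEST_URL = "https://<our-azure-endpoint>/api/telemetry"  # placeholder endpoint

while True:
    # e.g. {"button": 1, "riders": 12} from the micro-controller's tiny web server
    status = requests.get(ESP_STATUS_URL, timeout=5).json()
    if status.get("button"):
        requests.post(AZURE_INGEST_URL, json={
            "car_id": "ttc-5431",
            "riders": status.get("riders"),
            "timestamp": time.time(),
        }, timeout=5)
    time.sleep(10)
```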
## Challenges we ran into
Getting the WebView to render properly on the iOS app was tricky. There was a good amount of tinkering with configuration due to the page being served over http on a local area network (LAN).
Our ESP8266 micro-controller is a very nifty device, but such a low-cost chip comes with strict development constraints. The RAM and flash are tiny, so special care needed to be taken to ensure a stable foundation. This meant only being able to use vanilla JS (no jQuery, too big) and keeping code as optimized as possible. We built the live chat room with XHR and Ajax, as opposed to a WebSocket, which would have been more ideal.
## Accomplishments that we're proud of
We are proud of our UI design. We think that our app looks pretty dope!
We're also happy to have integrated many different features into our project. We had to learn about communication between many different tech layers.
We managed to design a live chat room that can handle multiple users at once and run it on a micro-controller with 80KiB of RAM. All the code on the micro-controller was designed to be as lightweight as possible, as we only had 500KB in total flash storage.
## What we learned
We learned how to code as lightly as possible with the tight restrictions of the chip. We also learned how to start and deploy on Azure, as well as how to interface between our micro-controller and the cloud.
## What's next for Commutr
There is a lot of additional functionality that we can add, things like: Presto integration, geolocation, and an emergency alert system.
In order to host and serve larger images, the ESP8266' measly 500KB of storage is planning on being upgraded with an SD card module that can increase storage into the gigabytes. Using this, we can plan to bring fully fledged WiFi connectivity to Toronto's underground railway. | ## 💡**Inspiration**
The apparent rise of mental health crises in post-secondary institutions over the past few years has unveiled a small part of a long-existing issue. University students have always seemed a bit sad, but it has never been this bad: 1 in 5 seriously contemplated suicide in the last year, and nearly half of students in Ontario report unmet mental health needs. We could all use a caring, convenient, and confidential friend who points you in the right direction and answers any questions you may have. In creating TextChecks, our hope is to connect students to a system they can depend on as they undertake the process of understanding their identity and place in the world.
## 🧠**What it does**
TextChecks is an Azure conversational AI that can find local supports for students and answer any mental health question by pulling from our database of three hundred medical articles and two common therapy workbooks. It can be accessed through SMS or our web interface. Users simply begin by sending START and follow the prompts. It takes a phased approach, guiding the user through viewing resources on their mental health topic of interest, learning about available supports near them, and a free-for-all Q&A for any other questions they may have.
## 💻**How we built it**
The front-end of our web interface was built with HTML, CSS, JavaScript, and Bootstrap. We use the Twilio API to enable text messaging, and the Azure Bot Service's provided elements for the web chatbox. To generate our responses, we used Azure Cognitive Search and Language Studio to extract question-answer pairs from three hundred medical articles and two common therapy workbooks and train the model. When there isn't a direct match, the bot parses the full contents of each article and book to pick the best match.
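A minimal sketch of the SMS side (the webhook route and the `answer_from_kb` helper are hypothetical stand-ins; the real answers come from the Azure knowledge base):

```python
# Sketch of a Twilio SMS webhook that routes messages to the question-answering backend.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def answer_from_kb(question: str) -> str:
    """Placeholder for the Azure Language Studio question-answering call."""
    return "Here are some resources near you..."

@app.route("/sms", methods=["POST"])
def sms_reply():
    incoming = request.form.get("Body", "").strip()   # Twilio sends the SMS text as "Body"
    reply = "Welcome to TextChecks! Ask me anything." if incoming.upper() == "START" \
        else answer_from_kb(incoming)
    twiml = MessagingResponse()
    twiml.message(reply)
    return str(twiml)
```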
## ❓**Challenges we ran into**
Our primary challenge was implementing the phased approach in the chatbot. Deciding the user flow and how we wanted information to be accessed was a significant part of the design, so we spent the most time with flow diagrams and maps, determining how users might interact with the application and translating that into Azure. Another challenge was ensuring data quality, so that our question-answer pairs had high confidence levels. We started by importing self-help books, but found that the resulting question-answer pairs were often inaccurate; instead, we opted for peer-reviewed medical articles and therapy workbooks that produced better pairings. We also wanted our application to work on desktop as well as mobile, and spent time tinkering with our front-end to ensure a responsive design.
## 💗**Accomplishments that we're proud of**
1. Although some of us have created chatbots in the past, TextChecks exceeds their capabilities and utilizes additional technologies to offer a more personalized experience.
2. It can be difficult to control the quality of responses from the bot, but we are satisfied with our approach in creating more accurate question-answer pairs.
3. Our user interface is clean, simple, and easy to use. Our animated homepage is a bonus!
4. It was our first time working with the Twilio API and we were able to get our chatbot working in SMS in under 30 minutes!
## 📚**What we learned**
We thoroughly explored Azure Bot Service, Language Studio and Azure Cognitive Search to create our application, by seeing how their capabilities best fit with our desired application features. We also learned how to train our bot with quality data, as well as connect it to SMS!
## 🚘**What's next for TextChecks**
In the future, we hope to have users calling into an automated phone system enabled by Natural Language Understanding and text-to-speech services. We can also partner with universities themselves, connecting with databases and exam schedules to pre-emptively reach out to stressed students. Finally, we can work together with Kids Help Phones and other textable lines to help lighten their basic question-and-answer load and potentially reroute in distress to their counselors. | winning |
## Not All Backs are Packed: An Origin Story (Inspiration)
A backpack is an extremely simple, and yet ubiquitous item. We want to take the backpack into the future without sacrificing the simplicity and functionality.
## The Got Your Back, Pack: **U N P A C K E D** (What's it made of)
* GPS location services
* 9,000 mAh battery
* Solar charging
* USB connectivity
* Keypad security lock
* Customizable RGB LED
* Android/iOS application integration
## From Backed Up to Back Pack (How we built it)
## The Empire Strikes **Back**(packs) (Challenges we ran into)
We ran into challenges getting the wood to laser-cut and bend properly. We found a unique cut pattern that kept our 1/8" wood durable where needed and flexible where not.
Connecting the hardware and the app through the API was also tricky.
## Something to Write **Back** Home To (Accomplishments that we're proud of)
## Packing for Next Time (Lessons Learned)
## To **Pack**-finity, and Beyond! (What's next for "Got Your Back, Pack!")
The next step would be revising the design to be more ergonomic: the backpack is currently a clunky, easy-to-fabricate shape with few curves to hug the user's back when worn. This, along with streamlining the circuitry and code, would be something to consider. | ## Inspiration
It's Friday afternoon, and as you return from your final class of the day, cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see countless students and others impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy, store, and collect dust inspired us to develop LendIt, a product that aims to curb the growing waste economy and generate passive income for the users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built around Raspberry Pi 3 boards (64-bit ARM, 1 GB RAM) and are connected to our app through Google's Firebase. The locker also uses facial recognition powered by OpenCV and object detection with Google's Cloud Vision API.
For our app, we used Flutter/Dart and interfaced with Firebase. To ensure *trust*, which is core to borrowing and lending, we experimented with Ripple's API to create an escrow system.
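A simplified stand-in for the locker's camera check (the full pipeline also matches the detected face against the borrower's profile, which is omitted here):

```python
# Only allow the latch to open if the camera actually sees a person in front of the locker.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_present(camera_index: int = 0) -> bool:
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return False
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```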
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker!
On the Flutter/Dart side, we were sceptical about how the interfacing with Firebase and the Raspberry Pi would work. Our app developer had previously worked only on web apps backed by SQL databases. NoSQL works a little differently and doesn't have a robust referential system, so writing queries for our read operations was tricky.
With the core tech of the project relying heavily on the Google Cloud Platform, we had to resort to unconventional methods to utilize its capabilities with an internet connection that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components, such as the Raspberry Pi, Flutter, the XRP Ledger escrow, and Firebase, which all have their own independent frameworks. Integrating all of them into an end-to-end automated system for the users is the accomplishment we are most proud of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good Proof of Concept giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our Object Detection and Facial Recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers we believe our product Lendit will grow with it. We would be honoured if we can contribute in any way to reduce the growing waste economy. | ## Inspiration
No API? No problem! LLMs have solidified the importance of text-based interaction, and APIcasso builds on this by simplifying the process of turning websites into structured APIs.
## What it does
APIcasso has two primary functionalities: ✌️
1. **Schema generation**: users provide a website URL and an empty JSON schema, which is automatically filled and returned by the Cohere AI backend as a well-structured API. It also generates a permanent URL for accessing the completed schema, allowing for easy reference and integration.
2. **Automation**: users provide a website URL and automation prompt, APIcasso returns an endpoint for the automation.
For each JSON schema or automation requested, the user is prompted to pay for their token in ETH via MetaMask.
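A hypothetical sketch of the schema-filling step (the prompt wording, truncation limit, and model defaults are illustrative, not the production prompt):

```python
# Fetch the page, then ask Cohere to fill the user's empty JSON schema from its text.
import json
import cohere
import requests

co = cohere.Client("<cohere-api-key>")

def fill_schema(url: str, empty_schema: dict) -> dict:
    page_text = requests.get(url, timeout=10).text[:8000]   # truncate to fit the prompt
    prompt = (
        "Fill this JSON schema using only information from the page below. "
        f"Schema: {json.dumps(empty_schema)}\n\nPage:\n{page_text}\n\nJSON:"
    )
    response = co.generate(prompt=prompt, max_tokens=500, temperature=0.2)
    return json.loads(response.generations[0].text)
```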
## How we built it
* The backend is Cohere-based and written in Python
* The frontend is Next.js-based and paired with Tailwind CSS
* Crypto integration (ETH) is done through MetaMask
## Challenges we ran into
We initially struggled with clearly defining our goal -- this idea has a lot of exciting potential projects/functionalities associated with it. It was difficult to pick just two -- (1) schema generation and (2) automation.
## Accomplishments that we're proud of
Being able to properly integrate the frontend and backend despite working separately throughout the majority of the weekend.
Integrating ETH verification.
Working with Cohere (a platform that all of us were new to).
Functioning on limited sleep.
## What we learned
We learned a lot about the intricacies of working with real-time schema generation, creating dynamic and interactive UIs, and managing async operations for seamless frontend-backend communication.
## What's next for APIcasso
If given extra time, we would plan to extend APIcasso’s capabilities by adding support for more complex API structures, expanding language support, and offering deeper integrations with developer tools and cloud platforms to enhance usability. | winning |
## Inspiration
Our spark to tackle this project was ignited by a teammate's immersive internship at a prestigious cardiovascular research society, where they served as a dedicated data engineer. Their firsthand encounters with the intricacies of healthcare data management and the pressing need for innovative solutions led us to the product we present to you here.
Additionally, our team members drew motivation from a collective passion for pushing the boundaries of generative AI and natural language processing. As technology enthusiasts, we were collectively driven to harness the power of AI to revolutionize the healthcare sector, ensuring that our work would have a lasting impact on improving patient care and research.
With these varied sources of inspiration fueling our project, we embarked on a mission to develop a cutting-edge application that seamlessly integrates AI and healthcare data, ultimately paving the way for advancements in data analysis and processing with generative AI in the healthcare sector.
## What it does
Fluxus is an end-to-end workspace for data processing and analytics for healthcare workers. We leverage LLMs to translate natural-language text to SQL, with the model prompt-engineered and fine-tuned specifically to handle InterSystems IRIS SQL syntax. We chose InterSystems IRIS as our database for storing electronic health records (EHRs) because it lets us leverage IntegratedML queries. Not only can healthcare workers generate fully functional SQL queries for their datasets with simple text prompts, they can now also perform instantaneous predictive analysis on those datasets with no effort. The power of AI is incredible, isn't it?
For example, a user can simply type in "Calculate the average BMI for children and youth from the Body Measures table." and our app will output
"SELECT AVG(BMXBMI) FROM P\_BMX WHERE BMDSTATS = '1';"
and you can simply run it on the built in intersystems database. With Intersystems IntegratedML, with the simple input of "create a model named DemographicsPrediction to predict the language of ACASI Interview based on age and marital status from the Demographics table.", our app will output
"CREATE MODEL DemographicsPrediction PREDICTING (AIALANGA) FROM P\_DEMO TRAIN MODEL DemographicsPrediction VALIDATE MODEL DemographicsPrediction FROM P\_DEMO SELECT \* FROM INFORMATION\_SCHEMA.ML\_VALIDATION\_METRICS;"
to instantly create, train, and validate an ML model on which you can perform predictive analysis with IntegratedML's "PREDICT" command. It's THAT simple!
Researchers and medical professionals working with big data no longer need to worry about the intricacies of SQL syntax, the obscurity of healthcare record formatting (column and table names that give little information), or the need to manually dig through large datasets to find what they're looking for. With simple text prompts, data processing becomes effortless, and predictive modelling with ML models becomes equally easy. Our DAG visualizations of connected tables/schemas also show how tables come together without having to browse through large datasets.
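A minimal sketch of the text-to-IRIS-SQL step (the schema hint string and prompt are placeholders, and the call uses the openai SDK interface that was current at the time):

```python
# Turn a plain-English question into IRIS SQL by prompting GPT-3.5 Turbo with schema hints.
import openai

openai.api_key = "<openai-key>"

SCHEMA_HINTS = "P_BMX(BMXBMI, BMDSTATS), P_DEMO(AIALANGA, RIDAGEYR, DMDMARTZ), ..."

def to_iris_sql(question: str) -> str:
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You translate questions into InterSystems IRIS SQL, "
                        "including IntegratedML statements. Tables: " + SCHEMA_HINTS},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return completion.choices[0].message["content"].strip()
```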
## How we built it
Our project incorporated a multitude of components. It was overwhelming at times, but satisfying to see so many parts come together.
Frontend: The frontend was developed in Vue.js with modern component libraries for a friendly UI. We also built a visualization tool using third-party graph libraries to draw directed acyclic graph (DAG) workflows between tables, showing how a new table is derived from the tables queried to produce it. To show this workflow in real time, we used a SQL parser (node-sql-parser) to get the list of source tables used in the LLM-generated query and rendered those source tables in the DAG, connected to the newly modified or created table.
Backend: We used Flask for the backend of our web service, handling multiple API endpoints from our data sources and LLM/prompt engineering functionality.
InterSystems: We connected an InterSystems IRIS database to our application and loaded it with healthcare data using InterSystems' Python connector libraries.
LLMs: We originally looked into OpenAI's Codex models, but ultimately worked with GPT-3.5 Turbo, which made it easier to fine-tune on our data (to a certain degree) so the LLM could interpret prompts and generate syntactically accurate queries with a high degree of accuracy. We wrapped the LLM and the prompt-engineering preprocessing as an API endpoint to integrate with our backend.
## Challenges we ran into
* LLMs are not as magical as they look. There was no existing training data for the kinds of datasets used in healthcare, so we had to manually feed entire database schemas to our LLM and attempt to fine-tune on them in order to get accurate queries. This meant intensive manual labour and a lot of frustrating failures while trying to fine-tune both current and legacy OpenAI models. Ultimately we reached a promising result that delivered a solid degree of accuracy with some fine-tuning.
* Integrating everything together: wiring up countless API endpoints (it honestly felt like writing production code at a certain point), hosting the frontend, and wrapping the LLM as an API endpoint. There are definitely pain points that still need to be addressed, and we plan to make this a long-term project that will help us identify bottlenecks we didn't have time to address within these 24 hours, while simultaneously expanding the application.
## Accomplishments that we're proud of
We were all aware of how much we aimed to get done in a mere span of 24 hours; it seemed near impossible. But we were on a mission, and had the drive to bring a whole new experience in data analytics and processing to the healthcare industry by leveraging the incredible power of generative AI. We're proud of the satisfaction of seeing our LLM work, fine-tuning manually configured data hundreds of lines long until it accurately gave us queries for IRIS (including IntegratedML queries), watching the frontend come to life, getting the countless API endpoints working, and integrating all our services into a highly functional application. Our team came together from different parts of the globe for this hackathon, but we were warriors who instantly clicked as a team and made the most of these 24 hours, powering through day and night to deliver this product.
## What we learned
Just how insane AI honestly is.
A lot about SQL syntax, working with Intersystems, the highs and lows of generative AI, about all there is to know about current natural language to SQL processes leveraging generative AI thanks to like 5+ research papers.
## What's next for Fluxus
* Develop an admin platform so users can put in their own datasets
* Fine-tune the LLM for larger schemas and more prompts
* buying a hard drive | # SmartKart
An IoT shopping cart that follows you around, combined with a cloud-based point-of-sale and store management system. It provides a comprehensive solution to eliminate lineups in retail stores, engage with customers without being intrusive, and serve as a platform for detailed customer analytics.
Featured by nwHacks: <https://twitter.com/nwHacks/status/843275304332283905>
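The writeup doesn't spell out how the follow-me behaviour is implemented; one plausible sketch is to track a coloured marker worn by the shopper and steer the cart toward its centroid:

```python
# Hypothetical follow-me logic: find a blue marker in the frame and steer toward it.
import cv2
import numpy as np

def steering_command(frame) -> str:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([100, 120, 70]), np.array([130, 255, 255]))  # blue tag
    moments = cv2.moments(mask)
    if moments["m00"] < 1e-3:
        return "stop"                        # marker lost
    cx = moments["m10"] / moments["m00"]     # marker centroid (x)
    offset = cx - frame.shape[1] / 2
    if offset < -40:
        return "left"
    if offset > 40:
        return "right"
    return "forward"
```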
## Inspiration
We questioned the current self-checkout model. Why wait in line in order to do all the payment work yourself!? We are trying to make a system that alleviates much of the hardships of shopping; paying and carrying your items.
## Features
* A robot shopping cart that uses computer vision to follow you!
* Easy-to-use barcode scanning (with an awesome booping sound)
* Tactile scanning feedback
* Intuitive user-interface
* Live product management system, view how your customers shop in real time
* Scalable product database for large and small stores
* Live cart geo-location, with theft prevention | ## Inspiration
MISSION: Our mission is to create an intuitive and precisely controlled arm for situations that are tough or dangerous for humans to be in.
VISION: This robotic arm application can be used in the medical industry, disaster relief, and toxic environments.
## What it does
The arm imitates the user's movements from a remote location. The 6-DOF range of motion allows the hardware to behave much like a human arm, which is ideal in environments where being physically present would endanger human life.
The HelpingHand can be used in a variety of applications; with our simple design, the arm can be easily mounted on a wall or a rover. With its simple controls, any user will find the HelpingHand easy and intuitive to use. A high-speed video camera lets the user see the arm and its environment so they can control the hand remotely.
## How I built it
The arm is controlled using a PWM servo Arduino library. The Arduino code receives control instructions over serial from a Python script, which uses OpenCV to track the user's actions. An additional feature uses an Intel RealSense camera and TensorFlow to detect and track the user's hand: the depth camera locates the hand, and a CNN identifies its gesture out of the 10 types we trained. This gave the robotic arm an additional dimension and a more realistic feel.
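A condensed sketch of the control loop (the serial port, baud rate, and comma-separated angle message are assumptions about the wiring, and the tracking step is stubbed out):

```python
# Map a tracked hand position to servo angles and stream them to the Arduino over serial.
import cv2
import serial

arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def to_servo_angles(x_norm: float, y_norm: float, grip_closed: bool):
    yaw = int(x_norm * 180)     # map 0..1 screen position to 0..180 degrees
    pitch = int(y_norm * 180)
    grip = 0 if grip_closed else 90
    return yaw, pitch, grip

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ...hand position from the OpenCV / RealSense tracking goes here...
    x_norm, y_norm, grip_closed = 0.5, 0.5, False   # placeholder values
    yaw, pitch, grip = to_servo_angles(x_norm, y_norm, grip_closed)
    arduino.write(f"{yaw},{pitch},{grip}\n".encode())  # Arduino parses this and drives the servos
```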
## Challenges I ran into
The main challenge was working with all 6 degrees of freedom on the arm without tangling it. This being a POC, we simplified the problem to 3 DOF, allowing for yaw, pitch, and gripper control only. Also, learning the RealSense SDK and processing depth images was a unique experience, thanks to the hardware provided by Dr. Putz at nwHacks.
## Accomplishments that I'm proud of
This POC has scope in a wide range of applications. Finishing a working project that involves both software and hardware debugging within the given time frame is a major accomplishment.
## What I learned
We learned about doing hardware hacks at a hackathon. We learned how to control servo motors and serial communication. We learned how to use camera vision efficiently. We learned how to write modular functions for easy integration.
## What's next for The Helping Hand
Improve control on the arm to imitate smooth human arm movements, incorporate the remaining 3 dof and custom build for specific applications, for example, high torque motors would be necessary for heavy lifting applications. | winning |
## Meet Our Team :)
* Lucia Langaney, first-time React Native user, first in-person hackathon, messages magician, snack dealer
* Tracy Wei, first-time React Native user, first in-person hackathon, payments pro, puppy petter :)
* Jenny Duan, first in-person hackathon, sign-up specialist, honorary DJ
## Inspiration
First Plate was inspired by the idea that food can bring people together. Many people struggle with finding the perfect restaurant for a first date, which can cause stress and anxiety. By matching users with restaurants, First Plate eliminates the guesswork and allows users to focus on connecting with their potential partner over a shared culinary experience. In addition, food is a topic that many people are passionate about, so a food-based dating app can help users form deeper connections and potentially find a long-lasting relationship.
After all, the stomach is the way to the heart.
## What it does
Introducing First Plate, a new dating app that will change the way you connect with potential partners: by matching you with restaurants! Our app takes into account your preferences for cuisine and location, along with your dating preferences such as age, interests, and more.
With our app, you'll be able to swipe through restaurant options that align with your preferences and match with potential partners who share your taste in food and atmosphere. Imagine being able to impress your date with a reservation at a restaurant that you both love, or discovering new culinary experiences together.
Not only does our app provide a fun and innovative way to connect with people, but it also takes the stress out of planning a first date by automatically placing reservations at a compatible restaurant. No more agonizing over where to go or what to eat - our app does the work for you.
So if you're tired of the same old dating apps and want to spice things up, try our new dating app that matches people with restaurants. Who knows, you might just find your perfect match over a plate of delicious food!
## How we built it
1. Figma mockup
2. Built React Native front-end
3. Added Supabase back-end
4. Implemented Checkbook API for pay-it-forward feature
5. Connecting navigation screens & debugging
6. Adding additional features
When developing a new app, it's important to have a clear plan and process in place. Our first step was a brainstorming session in which we defined the app's purpose, features, and goals, getting everyone on the same page with a shared vision for the project. We then created a Figma mockup, a visual prototype of the app's user interface; this is a critical step because it gives the team a clear idea of how the app will look and feel. Once the mockup was completed, we began the React Native implementation, which required careful planning and attention to detail. Finally, once the app was complete, we moved on to debugging and final touches, resolving any last-minute bugs and issues before submission.
## Challenges we ran into
Developing the app in React Native was extremely difficult, as it was our first time working with the framework. The initial learning curve was steep, and the amount of new information required to build the app, coupled with the time constraint, made the process even more challenging. Debugging also posed a significant obstacle, as we often struggled to identify and fix errors in the codebase. Despite these difficulties, we persisted and learned a great deal about React Native, as well as how to debug code more efficiently. The experience taught us valuable skills that will be useful for future projects.
## Accomplishments that we're proud of
We feel extremely proud of having coded First Plate as React Native beginners. Building this app meant learning a new programming language, developing a deep understanding of software development principles, and having a clear understanding of what the app is intended to do. We were able to translate an initial Figma design into a React Native app, creating a user-friendly, colorful, and bright interface. Beyond the frontend design, we learned how to create a login and sign-up page, securely connected to the Supabase backend, and integrated the Checkbook API for the "pay it forward" feature. Both of these features were also new to our team. Along the way, we encountered many React Native bugs, which were challenging and time-consuming to debug as a beginner team. We implemented front-end design features such as scroll view, flexbox, tab and stack navigation, a unique animation transition, and linking pages using a navigator, to create a seamless and intuitive user experience in our app. We are proud of our teamwork, determination, and hard work that culminated in a successful project.
## What we learned
In the course of developing First Plate, we learned many valuable lessons about app development. One of the most important things we learned was how to implement different views, and navigation bars, to create a seamless and intuitive user experience. These features are critical components of modern apps and can help to keep users engaged and increase their likelihood of returning to the app.
Another significant learning experience was our introduction to React Native, a powerful and versatile framework that allows developers to build high-quality cross-platform mobile apps. As previous Swift users, we had to learn the basics of this language, including how to use the terminal and Expo to write code efficiently and effectively.
In addition to learning how to code in React Native, we also gained valuable experience in backend development using Supabase, a platform that provides a range of powerful tools and features for building, scaling, and managing app infrastructure. We learned how to use Supabase to create a real-time database, manage authentication and authorization, and integrate with other popular services like Stripe, Slack, and GitHub.
Finally, we used the Checkbook API to allow the user to create digital payments and send digital checks within the app using only another user's name, email, and the amount the user wants to send. By leveraging these powerful tools and frameworks, we were able to build an app that was not only robust and scalable but also met the needs of our users. Overall, the experience of building First Plate taught us many valuable lessons about app development, and we look forward to applying these skills to future projects.
## What's next for First Plate
First Plate has exciting plans for the future, with the main focus being on fully implementing the front-end and back-end of the app. The aim is to create a seamless user experience that is efficient, secure, and easy to navigate. Along with this, our team is enthusiastic about implementing new features that will provide even more value to users. One such feature is expanding the "Pay It Forward" functionality to suggest who to send money to based on past matches, creating a streamlined and personalized experience for users. Another exciting feature is a feed where users can share their dining experiences and snaps of their dinner plates, or leave reviews on the restaurants they visited with their matches. These features will create a dynamic community where users can connect and share their love for food in new and exciting ways. In terms of security, our team is working on implementing end-to-end encryption on the app's chat feature to provide an extra layer of security for users' conversations. The app will also have a reporting feature that allows users to report any disrespectful or inappropriate behavior, ensuring that First Plate is a safe and respectful community for all. We believe that First Plate is a promising startup idea implementable on a larger scale. | ## Inspiration
The failure of a previous project that used the Leap Motion API motivated us to learn it properly and use it successfully this time.
## What it does
Our hack records a motion password desired by the user. Then, when the user wishes to open the safe, they repeat the hand motion that is then analyzed and compared to the set password. If it passes the analysis check, the safe unlocks.
## How we built it
We built a cardboard model of our safe and its motion-input devices using Leap Motion and Arduino hardware. The lock itself is driven by an Arduino-controlled stepper motor.
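A sketch of how the recorded motion can be compared against the stored password and the unlock command sent to the Arduino (the distance threshold and one-byte serial command are assumptions; Leap Motion frame capture is omitted):

```python
# Compare two palm-position traces and, if they match closely enough, tell the Arduino to unlock.
import math
import serial

def motion_distance(recorded, attempt):
    """Average 3-D distance between two palm-position traces of (x, y, z) points."""
    n = min(len(recorded), len(attempt))
    total = sum(math.dist(recorded[i], attempt[i]) for i in range(n))
    return total / max(n, 1)

def try_unlock(stored_trace, new_trace, port="/dev/ttyACM0"):
    if motion_distance(stored_trace, new_trace) < 25.0:   # millimetres, tuned by hand
        with serial.Serial(port, 9600, timeout=1) as arduino:
            arduino.write(b"U")        # Arduino steps the motor to open the latch
        return True
    return False
```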
## Challenges we ran into
Learning the Leap Motion API and debugging were the toughest challenges for our group. Hot glue dangers and complications also impeded our progress.
## Accomplishments that we're proud of
All of our hardware and software worked to some degree of success. However, we recognize that there is room for improvement and if given the chance to develop this further, we would take it.
## What we learned
The Leap Motion API is more difficult to use than expected, while communication between Python programs and Arduino programs is simpler than expected.
## What's next for Toaster Secure
-Wireless Connections
-Sturdier Building Materials
-User-friendly interface | ## Inspiration
Food is capable of uniting all of us, no matter which demographic we belong to or which cultures we identify with. Our team recognized how challenging it can be for groups to choose a restaurant that accommodates everyone's preferences. Furthermore, food apps like Yelp and Zomato can cause 'analysis paralysis' because there are too many options to choose from. Because of this, we wanted to build a platform that facilitates coming together for food and makes the process as simple and convenient as possible.
## What it does
Bonfire is an intelligent food app that takes into account the food preferences of multiple users and provides a fast, reliable, and convenient recommendation based on the aggregate inputs of the group. To remove any friction while decision-making, Bonfire is even able to make a reservation on behalf of the group using Google's Dialogflow.
## How we built it
We used Android Studio to build the mobile application and connected it to a Python back-end. We used Zomato's API for locating restaurants and collecting data, and the Google Sheets API with Google Apps Script to decide the optimal restaurant recommendation given the users' preferences. We then used Adobe XD to create detailed wireframes to visualize the app's UI/UX.
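A sketch of the lookup and group-scoring step (the Zomato v2.1 endpoint and parameters are recalled from the public docs of the time and should be treated as assumptions; the scoring rule is purely illustrative):

```python
# Fetch nearby restaurants from Zomato, then pick the one matching the most group preferences.
import requests

def find_restaurants(lat, lon, cuisine_ids, api_key):
    resp = requests.get(
        "https://developers.zomato.com/api/v2.1/search",
        headers={"user-key": api_key},
        params={"lat": lat, "lon": lon, "cuisines": ",".join(cuisine_ids), "sort": "rating"},
        timeout=10,
    )
    resp.raise_for_status()
    return [r["restaurant"] for r in resp.json()["restaurants"]]

def group_pick(restaurants, preferences):
    """Score each option by how many group members' preferred cuisines it matches."""
    def score(r):
        return sum(c in r["cuisines"] for prefs in preferences for c in prefs)
    return max(restaurants, key=score)
```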
## Challenges we ran into
We found that integrating all the APIs into our app was quite challenging, as some required partner-level access privileges and restricted the amount of information we could request. In addition, choosing a framework to connect the back-end was difficult.
## Accomplishments that we're proud of
As our team is comprised of students studying bioinformatics, statistics, and kinesiology, we are extremely proud to have been able to bring an idea to fruition, and we are excited to continue working on this project as we think it has promising applications.
## What we learned
We learned that trying to build a full-stack application in 24 hours is no easy task. We managed to build a functional prototype and a wireframe to visualize what the UI/UX experience should be like.
## What's next for Bonfire: the Intelligent Food App
For the future of Bonfire, we are aiming to include options for dietary restrictions and incorporating Google Duplex into our app for a more natural-sounding linguistic profile. Furthermore, we want to further polish the UI to enhance the user experience. To improve the quality of the recommendations, we plan to implement machine learning for the decision-making process, which will also take into account the user's past food preferences and restaurant reviews. | winning |
## Inspiration🧠
Even with today’s cutting edge technology and leading scientific research that helps us develop, advance and improve in everyday life, those with rare genetic diseases are still left behind.
Living with a life-threatening condition with little to no cure is already frustrating, considering that "less than 5% of more than 7,000 rare diseases believed to affect humans currently have an effective treatment". But when doctors aren't knowledgeable or experienced enough to treat such cases, or when patients have only themselves to rely on to search for experimental drugs, the everyday struggle becomes a nightmare.
But what's even more tragic is that despite there being "300 million people worldwide [suffering a rare disease], [where] approximately 4% of the total world population is affected by [one] at any given time", people still have to go through the exhausting trial-and-error process of finding a cure or treatment, EVEN when, in several cases, they share exactly the same disease!
Shockingly enough, there isn't ANY collection of data or analysis being shared on what medications/treatments work for different people, and which ones help or harm them!
**Citation**
Kaufmann, P., Pariser, A.R. & Austin, C. From scientific discovery to treatments for rare diseases – the view from the National Center for Advancing Translational Sciences – Office of Rare Diseases Research. Orphanet J Rare Dis 13, 196 (2018). <https://doi.org/10.1186/s13023-018-0936-x>
Wakap SN, Lambert DM, Alry A, et al. Estimating cumulative point prevalence of rare diseases: analysis of the Orphanet database [published online September 16, 2019]. Eur J Hum Genet. doi: 10.1038/s41431-019-0508-0. <https://ojrd.biomedcentral.com/articles/10.1186/s13023-018-0936-x>
## What it does 💻
For our project, we have done our best to match Varient's goal of helping develop a diagnosis-assistance tool for the rare-disease population (people with genetic mutations), so that it becomes a crucial aid in finding appropriate drug treatments, providing accurate and up-to-date information, and supporting decision-making.
Our My Heroes gene assistant web app's specific features include:
* The ability to select images that indicate a relevant gene in the report
* Generating and displaying relevant keywords, such as the names of related diseases and mutated genes
* Providing insights on how the related disease can be treated
* Helping patients understand key information from their reports
The user interface includes user registration/login (for authorization and account information), a drop box/file attachment area (for images), a catalog of uploads (for modifying or deleting items), a display of the labeled/annotated report, and a summary page.
## How we built it 🔧
1. Used Python for the backend and Machine learning component of the app.
2. Implemented pytesseract OCR to extract text/keywords (such as mutation names) from the report images, and labeled them on the image with OpenCV.
3. Used spaCy's en\_ner\_bionlp13cg\_md (a pretrained NLP model for medical text) to extract relevant keywords from the OCR'd text (see the sketch after this list).
4. Used streamlit library to deploy the machine learning web app.
5. Worked with React.js for frontend (login, signup, the navbar, settings), Firebase for User authentication and Google authentication integration and Firestore (NoSQL database) implementation as well as storage.
6. We utilized Google Docs/Discord for brainstorming, and Trello for distributing and keeping track of time and tasks assigned.
7. Utilized Figma for designing and prototyping.
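A condensed version of steps 2 and 3 above (file names and the return format are simplified; the real app wraps this in Streamlit):

```python
# OCR a report page, then pull out biomedical entities with the scispacy NER model.
import cv2
import pytesseract
import spacy

nlp = spacy.load("en_ner_bionlp13cg_md")   # scispacy biomedical NER model

def extract_keywords(image_path):
    image = cv2.imread(image_path)
    text = pytesseract.image_to_string(image)             # OCR the report page
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]   # e.g. ("BRCA1", "GENE_OR_GENE_PRODUCT")

print(extract_keywords("report_page1.png"))
```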
## Challenges we ran into 🔥
1. Familiarizing ourselves with Figma to build a complex but easy to use medical record health app.
2. We had trouble integrating the NLP model part to the frontend and ended up using streamlit to make the backend functional.
3. Even though we were aware the machine learning part would take a significant chunk of our time, we didn't realize just how much it actually would. It required all hands on deck, which kept us from other tasks.
4. With one of our members being a novice programmer and involved with another large scale event commitment taking place at the same time as this event, we were short one team member
5. Another team member lacking significant experience in Machine Learning and related technologies resulted in a lack of cohesiveness throughout the process.
6. None of us was familiar with Flask, and only one of us was familiar with REST APIs. We also had several issues integrating with the frontend (connecting APIs, sending POST requests, getting data back), and had to find an alternative solution: using Streamlit to display images, modify them with Python functions, and show the new image and extracted keywords. We also had issues deploying the Streamlit app, as we kept getting errors.
## Accomplishments that we're proud of 💪
We are proud of being able to collaborate and work together despite our overall lack of experience in machine learning and the differences in prior experience among teammates.
We are also proud to have built a functional ML app and made it usable, given that we spent most of our time getting the NLP to work.
**How to run the app**
* Pytesseract
  * For Windows: via <https://github.com/UB-Mannheim/tesseract/wiki>
  * For Mac:
* Download and install the spaCy model:
  * Download en\_ner\_bionlp13cg\_md via <https://allenai.github.io/scispacy/>
  * pip install spacy
  * Pip install
## What we learned ✍️
1. Restoring the health of patients by streamlining the process and helping doctors provide the best treatment for such specific and rare diseases (our app could be used as an AI assistant or a personal record tracker).
2. It facilitates universal information sharing and keeps all the data in one place (some people might get private treatments which don't require the use of a health card, so they can input their information into this central platform for an easier, quicker, and more efficient process).
## What's next for My Heroes ✨
Integrating the machine learning app with the frontend so that the app can have actual users and a smooth, simple UI design.
To improve the accessibility features of the app.
We would love to see our app to be in the hands of our patiently waiting users as soon as possible! We hope that with its improvements, it helps them provide some peace of mind, and hopefully makes life easy for them. | Check out our project at <http://gitcured.com/>
## Inspiration
* We chose the Treehacks Health challenge about creating a crowdsourced question platform for sparking conversations between patients and physicians in order to increase understanding of medical conditions.
* We really wanted to build a platform that did much more than just educate users with statistics and discussion boards. We also wanted to explore the idea that not many people understand how different medical conditions work conjunctively.
* Often, people don't realize that medical conditions don't happen one at a time. They can happen together, thus raising complications with prescribed medication that, when taken at the same time, can be dangerous together and may lead to unpredictable outcomes. These are issues that the medical community is well aware of but your average Joe might be oblivious to.
* Our platform encourages people to ask questions and discuss the effects of living with two or more common diseases, and take a closer look at the apex that form when these diseases begin to affect the effects of each other on one's body.
## What it does
* In essence, the platform wants patients to submit questions about their health, discuss these topics in a freestyle chat system while exploring statistics, cures and related diseases.
* By making each disease, symptom, and medication a tag rather than a category, the mixing of all topics is what fuels the full potential of this platform. Patients, and even physicians, who might explore the questions raised regarding the overlap between, for example Diabetes and HIV, contribute to the collective curiosity to find out what exactly happens when a patient is suffering both diseases at the same time, and the possible outcomes from the interactions between the drugs that treat both diseases.
* Each explored topic is searchable and the patient can delve quite deep into the many combinations of concepts. GitCured really is fueled by the questions that patients think of about their healthcare, and depend on their curiosity to learn and a strong community to discuss ideas in chat-style forums.
## How we built it
Languages used: Node.js, Sockets.IO, MongoDB, HTML/CSS, Javascript, ChartJS, Wolfram Alpha, Python, Bootstrap
## Challenges we ran into
* We had problems in implementing a multi-user real-time chat using sockets.io for every question that has been asked on our platform.
* Health data is incredibly hard to find. There are certain resources, such as data.gov and university research websites that are available, but there is no way to ensure quality data that can be easily parseable and usable for a health hack. Most data that we did find didn't help us much with the development of this app but it provided an insight to us to understand the magnitude of the health related problems.
* Another issue we faced was differentiating ourselves from other services that meet part of the prompt's criteria. Our focus was on thinking critically about how medical concepts affect each other and affect people, while providing patients a platform to discuss their healthcare. The goal was to design a space that encourages creative and curious thinking and invites questions that might never have been answered before.
## Accomplishments that we're proud of
We were pretty surprised we got this far into the development of this app. While it isn't complete, as apps never are, we had a great experience of putting ideas together and building a health-focused web platform from scratch.
## What we learned
* There is a very big issue that there is no central and reliable source for health data. People may have clear statistics on finance or technology, but there is so much secrecy and inconsistencies that come with working with data in the medical field. This creates a big, and often invisible, problem where computer scientists find it harder and harder to analyze biomedical data compared to other types of data. If we hadn't committed to developing a patient platform, I think our team would have worked on designing a central bank of health data that can be easily implementable in new and important health software. Without good data, development of bio technology will always be slow when developers find themselves trapped or stuck. | ## Inspiration
**250,000 people in the United States die every year from medical negligence. However, it is not negligence that leads to the most fatalities, complications, and increased hospitalizations. Rather, it is the lack of universally accessible patient medical records and software to encourage teams of doctors to work together for every patient. After thousands of hours spent shadowing and working in the medical sector, our team set off to catalyze modern medicine.**
In the current system of medical records, Emergency Room physicians are given little to no information on patients, and in many severe cases, have no time to collect it. Patients with certain diseases can suffer fatal consequences if given many common medications, including Aspirin. ER doctors have no way of knowing this if the patient is unconscious or non expressive.
Doctors in different networks have limited communication. This is especially pertinent as almost half of all doctors are in private practice. When a patient’s doctors do not work together, misdiagnosis, polypharmacy, and extreme change of prescriptions become rampant. Commonly, a doctor will give a patient a new medication that does not work well with drugs being prescribed by the patient’s other doctors.
When a patient visits a new doctor for the first time, the doctor must get all their information by interviewing the patient. This means that doctors are forced to rely on a patient’s memory, rather than past medical records, to provide life or death information.
Doctors must be able to work together, using cloud based information, in order to provide modern healthcare.
## What it does
First: an example Next: an explanation
**A patient, John Doe, is involved in a car crash and suffers a severe left arm injury, and immediately rushed to the ER. At the hospital, a doctor scans Doe's medx ID to get Doe's profile on his phone. He checks known allergies to see if Doe is allergic to EpiPen which he wants to use to help stop bleeding. He sees that Doe is severely allergic and uses a pressure system to stop bleeding instead, avoiding a fatal allergic reaction.**
**ER doctors then perform surgery and note this as a procedure. The doctors want to prescribe Doe Vicodin as a pain reliever, but as they can see on his diagnosis medx profile he suffers from substance abuse disorder; so instead they raise the dosage of his already existing Norco prescription, a less addictive opioid.**
**Doe's primary doctor back home now sees on medx that Doe just had a left arm trauma procedure and that his dosage of Norco was raised. Now during Doe's visit to the primary doctor next week, the doctor is fully up to speed on Doe's medical history and can even message Doe's surgical team at the ER any questions, because they are now active in Doe's system. Without medx, Doe's primary doctor and surgical team are not connected and Doe's primary is likely to reduce the Norco prescription back to normal, thinking there was a mistake.**
medx exists as a website and as an accompanying app. A patient creates a profile on the website (age, ethnicity, ssn, name, username) and is **automatically generated a patient ID and data matrix**. The user then downloads the app and can now login to generate the same data matrix. The user will also be sent a physical ID card with this data matrix to keep in a wallet in case of severe accident when the phone cannot be unlocked (patient unconscious).
A physician creates a profile on the website with their DEA identifier and can then enter a patient's ID number when the patient comes to their office for a first visit, or use the medx app to scan the patient's data matrix in emergency medicine. They then gain access to the patient's medical records in a system compliant with HIPAA-based regulations and built with a focus on security. With medx, a doctor uses a streamlined, universal, cloud-based system instead of being at the mercy of a patient's memory or limited understanding of their own medical profile.
Once a physician enters a patient's unique identifier or scans their data matrix, they can access the patient's medications, diagnoses, lab results, medical imaging results, past procedures, and the doctor's notes for all of these. medx is unique in that it has built-in discussion threads, so doctors can message in real time to coordinate patient care across specialties and keep the entire network of care working together.
medx uses a collection of algorithms to detect dangerous combinations of medical data in a patient's profile and marks these in red to attract a doctor's attention. An example of this is that a patient with a rare disease called RCVS cannot take Aspirin or other NSAIDS. On the dashboard for a patient with RCVS who was just prescribed aspirin by a doctor, would appear a red warning under a section named "contraindications" which cross references a patient's known diseases and their prescriptions to find dangerous combinations.
medx allows a doctor to download a complete report for the patient so that the doctor can store this in their personal visit records.
## How we built it
medx was built in three parts: frontend, backend, and mobile. We built the backend as an express-based node.js web server using a REST api to deliver data from a mongodb database. The frontend is a dashboard-based interface using Bootstrap. The mobile app is an iOS app built using swift.
## Challenges we ran into
Using software to recognize and generate data matrices. Creating a modern, yet sophisticated, user interface with ease of use as its number one concern.
## Accomplishments that we're proud of
Working together and bridging our disciplines of medicine and computing respectively, to begin to help build a brighter, smarter future.
## What we learned
How to build and deploy a complex, fast, scalable web app. We also learned how to use Google Cloud to deploy a node.js web server. Most importantly, we learned the incredible power of technology to shape our world and transform people's lives.
## What's next for medx
Improving on cybersecurity to fully meet HIPPA standards. Employing machine learning to search for abnormal imaging results. Using machine learning to piece together all elements of medical profile to predict and suggest treatment plans.
(To try out the physician section of our website, please use DEA #: 31415926 Password: pass) | partial |
## Inspiration
Insurance Companies spend millions of dollars on carrying marketing campaigns to attract customers and sell their policies.These marketing campaigns involve giving people promotional offers, reaching out to them via mass marketing like email, flyers etc., these marketing campaigns usually last for few months to almost years and the results of such huge campaigns are inedible when raw.
## What it does
Intellisurance visualizes such campaign data and allows the insurance companies to understand and digest such data. These visualizations help the company decide whom to target next, how to grow their business? and what kind of campaigns or best media practices to reach to a majority of their potential customer base.
## How we built it
We wanted to give the insurance companies a clear overview of how their past marketing campaigns were, the pros , the cons, the ways to target a more specific group, the best practices etc. We also give information of how they gained customers over the period. Most key factor being the geographic location, we chose to display it over a map.
## Challenges we ran into
When we are dealing with the insurance campaign data we are dealing with millions of rows, compressing that data into usable information.
## Accomplishments that we're proud of
This part was the most challenging part where we pulled really necessary data and had algorithms to help users experience almost no lag while using the application. | ## Inspiration
When we first read Vitech's challenge for processing and visualizing their data, we were collectively inspired to explore a paradigm of programming that very few of us had any experience with, machine learning. With that in mind, the sentiment of the challenge themed around health care established relevant and impactful implications for the outcome of our project. We believe that using machine learning and data science to improve the customer experience of people in the market for insurance plans, would not only result in a more profitable model for insurance companies but improve the lives of the countless people who struggle to choose the best insurance plans for themselves at the right costs.
## What it does
Our scripts are built to parse, process, and format the data provided by Vitech's live V3 API database. The data is initially filtered using Solr queries and then formatted into a more adaptable comma-separated variable (CSV) file. This data is then processed by a different script through several machine learning algorithms in order to extract meaningful data about the relationship between an individual's personal details and the plan that they are most likely to choose. Additionally, we have provided visualizations created in R that helped us interpret the many data points more effectively.
## How we built it
We initially explored all of the ideas that we had regarding how exactly we planned to process the data and proceeded to pick Python as a suitable language and interface in which we believed that we could accomplish all of our goals. The first step was parsing and formatting data after which we began observing it through the visualization tools provided by R. Once we had a rough idea about how our data is distributed, we continued by making models using the h2o Python library in order to model our data.
## Challenges we ran into
Since none of us had much experience with machine learning prior to this project, we dived into many software tools we had never even seen before. Furthermore, the data provided by Vitech had many variables to track, so our deficiency in understanding of the insurance market truly slowed down our progress in making better models for our data.
## Accomplishments that we're proud of
We are very proud that we got as far as we did even though out product is not finalized. Going into this initially, we did not know how much we could learn and accomplish and yet we managed to implement fairly complex tools for analyzing and processing data. We have learned greatly from the entire experience as a team and are now inspired to continue exploring data science and the power of data science tools.
## What we learned
We have learned a lot about the nuances of processing and working with big data and about what software tools are available to us for future use.
## What's next for Vitech Insurance Data Processing and Analysis
We hope to further improve our modeling to get more meaningful and applicable results. The next barrier to overcome is definitely related to our lack of field expertise in the realm of the insurance market which would further allow us to make more accurate and representative models of the data. | ## Inspiration
The long and tedious process of processing insurance claims. With Machine learning capabilities from Azure and AWS, we strived to expedite this long process.
## What it does
We developed an app that allows users in accidents to take pictures of their damaged car, and from there, we used Machine learning to determine what percentage the car is damaged. From this and other information submitted by the user, we can make insurance companies' lives way easier by providing them information such as the model of the car, the current market value of the car as well as what % the car is damaged. This will ultimately help speed up their process of determining how much to pay the user for their damaged car alot quicker and the user will get cash back 10x faster.
## How I built it
We used Android studio and java.
## Challenges I ran into
Getting azure's custom Vision API to work was a big pain.
## Accomplishments that I'm proud of
Its Done!
## What I learned
Lots! We learned tons about android studio and how to use azure's custom Vision API.
## What's next for Insure
We plan to expand our target audience to target other insurances, such as house insurance, etc. | partial |
## Inspiration
After observing different hardware options, the dust sensor was especially outstanding in its versatility and struck us as exotic. Dust-particulates in our breaths are an ever present threat that is too often overlooked and the importance of raising awareness for this issue became apparent. But retaining interest in an elusive topic would require an innovative form of expression, which left us stumped. After much deliberation, we realized that many of us had a subconscious recognition for pets, and their demanding needs. Applying this concept, Pollute-A-Pet reaches a difficult topic with care and concern.
## What it does
Pollute-A-Pet tracks the particulates in a person's breaths and records them in the behavior of adorable online pets. With a variety of pets, your concern may grow seeing the suffering that polluted air causes them, no matter your taste in companions.
## How we built it
Beginning in two groups, a portion of us focused on connecting the dust sensor using Arduino and using python to connect Arduino using Bluetooth to Firebase, and then reading and updating Firebase from our website using javascript. Our other group first created gifs of our companions in Blender and Adobe before creating the website with HTML and data-controlled behaviors, using javascript, that dictated the pets’ actions.
## Challenges we ran into
The Dust-Sensor was a novel experience for us, and the specifications for it were being researched before any work began. Firebase communication also became stubborn throughout development, as javascript was counterintuitive to object-oriented languages most of us were used to. Not only was animating more tedious than expected, transparent gifs are also incredibly difficult to make through Blender. In the final moments, our team also ran into problems uploading our videos, narrowly avoiding disaster.
## Accomplishments that we're proud of
All the animations of the virtual pets we made were hand-drawn over the course of the competition. This was also our first time working with the feather esp32 v2, and we are proud of overcoming the initial difficulties we had with the hardware.
## What we learned
While we had previous experience with Arduino, we had not previously known how to use a feather esp32 v2. We also used skills we had only learned in beginner courses with detailed instructions, so while we may not have “learned” these things during the hackathon, this was the first time we had to do these things in a practical setting.
## What's next for Dustables
When it comes to convincing people to use a product such as this, it must be designed to be both visually appealing and not physically cumbersome. This cannot be said for our prototype for the hardware element of our project, which focused completely on functionality. Making this more user-friendly would be a top priority for team Dustables. We also have improvements to functionality that we could make, such as using Wi-Fi instead of Bluetooth for the sensors, which would allow the user greater freedom in using the device. Finally, more pets and different types of sensors would allow for more comprehensive readings and an enhanced user experience. | ## Inspiration
snore or get pourd on yo pores
Coming into grade 12, the decision of going to a hackathon at this time was super ambitious. We knew coming to this hackathon we needed to be full focus 24/7. Problem being, we both procrastinate and push things to the last minute, so in doing so we created a project to help us
## What it does
It's a 3 stage project which has 3 phases to get the our attention. In the first stage we use a voice command and text message to get our own attention. If I'm still distracted, we commence into stage two where it sends a more serious voice command and then a phone call to my phone as I'm probably on my phone. If I decide to ignore the phone call, the project gets serious and commences the final stage where we bring out the big guns. When you ignore all 3 stages, we send a command that triggers the water gun and shoots the distracted victim which is myself, If I try to resist and run away the water gun automatically tracks me and shoots me wherever I go.
## How we built it
We built it using fully recyclable materials as the future innovators of tomorrow, our number one priority is the environment. We made our foundation fully off of trash cardboard, chopsticks, and hot glue. The turret was built using our hardware kit we brought from home and used 3 servos mounted on stilts to hold the water gun in the air. We have a software portion where we hacked a MindFlex to read off brainwaves to activate a water gun trigger. We used a string mechanism to activate the trigger and OpenCV to track the user's face.
## Challenges we ran into
Challenges we ran into was trying to multi-thread the Arduino and Python together. Connecting the MindFlex data with the Arduino was a pain in the ass, we had come up with many different solutions but none of them were efficient. The data was delayed, trying to read and write back and forth and the camera display speed was slowing down due to that, making the tracking worse. We eventually carried through and figured out the solution to it.
## Accomplishments that we're proud of
Accomplishments we are proud of is our engineering capabilities of creating a turret using spare scraps. Combining both the Arduino and MindFlex was something we've never done before and making it work was such a great feeling. Using Twilio and sending messages and calls is also new to us and such a new concept, but getting familiar with and using its capabilities opened a new door of opportunities for future projects.
## What we learned
We've learned many things from using Twilio and hacking into the MindFlex, we've learned a lot more about electronics and circuitry through this and procrastination. After creating this project, we've learned discipline as we never missed a deadline ever again.
## What's next for You snooze you lose. We dont lose
Coming into this hackathon, we had a lot of ambitious ideas that we had to scrap away due to the lack of materials. Including, a life-size human robot although we concluded with an automatic water gun turret controlled through brain control. We want to expand on this project with using brain signals as it's our first hackathon trying this out | ## Inspiration
Did you know the average person spends about $1,800 a year on clothing, often discarding items that contribute to the 92 million tons of textile waste in landfills annually? Enter EcoCloset, the app revolutionizing how we value and recycle our wardrobes. With just a simple photo, our innovative AI-powered solution provides instant clothing valuation and recommendations for reselling, donating, or eco-friendly disposal. We support cultural clothing and gender expression while offering a gamified experience with rewards for sustainable practices. EcoCloset addresses critical issues: only 20% of textiles are collected for reuse or recycling globally, almost 60% of all clothing material is plastic, and clothing waste significantly contributes to air pollution and health risks. Our unique approach, combining multiple databases and AI, fills a gap identified by researchers in 2023. Users love us because we tap into the growing thrifting trend, help save and make money, build a community around sustainable fashion, and offer rewards for eco-friendly choices. Join EcoCloset in closing the loop on fashion waste while fostering a more sustainable and inclusive clothing ecosystem.
## What it does
Like no other app in the industry currently as functional and feature packed, EcoCloset is a sustainability-driven platform that helps users appraise and manage their clothing more effectively while promoting eco-conscious decisions. With EcoCloset, users can:
1. Easily Upload Photos: Users can quickly snap a picture of any garment they want to evaluate, enabling effortless interaction with the platform.
2. AI-Powered Analysis: Our app uses Machine Learning and AI to analyze the garment’s brand, age, condition, and material. Based on this analysis, the app determines whether the clothing should be resold, recycled, or discarded responsibly.
3. Appraisal and Value Estimation: EcoCloset provides users with an estimated resale value for their garments, helping them make informed decisions on whether the item is worth selling or donating.
4. Location-Based Drop-Off Suggestions: The app offers convenient location detection, helping users find nearby thrift stores or donation centers to drop off their clothes, making the process of donating and recycling more accessible.
5. Gamified Experience and Rewards: EcoCloset includes a leaderboard system that tracks user activity and rewards them with coupons for thrift stores as they donate or recycle more clothes, incentivizing sustainable behavior through gamification.
6. Educational Features: The app educates users about the environmental impact of different types of materials, particularly the effects of synthetic fibers on the planet. This encourages users to donate instead of disposing of items, thereby reducing their environmental footprint.
7. By combining convenience, education, and a rewards system, EcoCloset promotes a sustainable lifestyle while helping users declutter and extend the life of their clothing.
## How we built it
We built EcoCloset using two different versions, each tailored to specific user needs and technical requirements.
The primary version uses React and Next.js for a modern, responsive front-end that allows users to interact with the app across devices. On the backend, we use Flask and Node.js to handle real-time API requests and manage data processing efficiently. Our machine learning models for evaluating clothing condition were developed in Python using Sklearn, and deployed using ONNX for optimal performance and fast inference times. We integrated the OpenAI API to provide intelligent recommendations based on the condition of the clothing, and store all user data and assessments in MongoDB. We chose Vercel for seamless, scalable deployment, and version control is handled through GitHub.
In addition to this, we built an alternate version of EcoCloset fully on Streamlit, which uses Python from end-to-end. This version focuses on simplicity and rapid prototyping. It allows users to upload clothing images directly into the Streamlit app, where a pre-trained machine learning model—developed in Sklearn—assesses the item’s resale or donation value. We chose Streamlit for its ability to quickly spin up interactive web apps and deploy Python models directly, giving us the flexibility to iterate on features fast. This alternative version runs entirely in Python, allowing for easier modifications for future development, especially in ML-heavy scenarios.
By having two versions—one built with modern web technologies like React and one with Streamlit for rapid experimentation—we’re able to adapt the app to various user requirements and deployment contexts.
## Challenges we ran into
We faced several significant challenges while developing EcoCloset. One of the biggest obstacles was setting up cron jobs for scheduled tasks like database updates and notifications. We spent a considerable amount of time troubleshooting this issue, as managing time-sensitive tasks in our system was more complex than expected.
Another major hurdle was training our machine learning model. We lacked access to GPUs and had only about 2,000 images available, which wasn’t enough for the scale of model training we originally envisioned. This forced us to look for pre-trained models that could be adapted to our needs, but finding a model that fit our exact specifications was difficult. Eventually, we had to adjust our approach and leverage ONNX to make deployment smoother.
We also encountered problems with Streamlit, our initial framework choice for a simple and quick MVP. Streamlit doesn’t have built-in support for authentication and user logins, which was crucial for our app. This limitation forced us to switch to a Next.js setup for our main version, where we could implement more complex functionality like authentication and user management. Streamlit, though great for rapid prototyping, also had a few other limitations in terms of UI flexibility and scalability, which made us rethink our framework choices.
Integrating the backend with the frontend came with its own set of difficulties. Processing the AI responses from the Flask API into a user-friendly format on the frontend was challenging, and it took several iterations to get the communication between the backend and frontend smooth and efficient.
Additionally, we had a steep learning curve with Figma as we worked on the design of our user interface. While Figma is a powerful tool, understanding how to create responsive and intuitive designs that work well on different platforms took time and effort.
These challenges taught us the importance of flexibility in choosing technologies and helped us improve our problem-solving skills across the stack.
## Accomplishments that we're proud of
We’re incredibly proud of the fact that we built our own machine learning model to analyze clothing conditions rather than taking the easy route and relying solely on pre-built APIs or just wrapping ChatGPT for this task. By developing our own model, we gained far more control over the results and were able to tailor the solution to our specific problem, which gives EcoCloset a unique edge. This decision also allowed us to better understand the complexities of evaluating clothing for resale, donation, or recycling.
Another accomplishment we’re excited about is the development of two fully functional versions of EcoCloset: one built using Next.js for a more robust user experience and another using Streamlit for rapid prototyping and experimentation. Having both versions gives us flexibility and allows us to adapt the solution to different user needs or project requirements.
But perhaps our biggest point of pride is that we are solving a real-world sustainability problem that hasn't been addressed before at this scale. By creating a platform that helps reduce clothing waste and promotes eco-friendly practices, we’re tackling an issue with significant environmental impact. The fact that we’re contributing to a solution that encourages more sustainable fashion consumption and waste management is something we’re extremely proud of.
## What we learned
One of the biggest takeaways from this project is that cron jobs are a pain in the bum! Setting up scheduled tasks turned out to be more challenging than expected, and we quickly learned how important it is to have a solid understanding of scheduling tasks for backend operations.
We also learned that training a machine learning model from scratch is no easy feat, especially when you don’t have access to GPUs or a large dataset. Finding a pre-trained model or dataset that fits your needs can also be a huge challenge. It was eye-opening to realize how much work goes into developing models for niche applications like ours, and this experience has deepened our respect for those who specialize in AI and machine learning.
## What's next for EcoCloset
Meta RayBans Intergration for easy acess without pulling out your phone (Kinda works but not fully as no SDK)
Looking ahead, there’s a lot of potential for EcoCloset to grow and make an even greater impact. One key goal is expanding our international outreach. We want to bring EcoCloset’s sustainable mission to a global audience, ensuring that users from all over the world can engage with the platform and reduce their fashion waste. We also plan to enhance the app’s diversity aspect by expanding support for more cultural clothing donations and promoting inclusivity in fashion.
Partnering with more companies that prioritize sustainability is a huge next step for us. We aim to promote eco-friendly products and offer users more coupons and rewards for making sustainable choices. We’re also committed to providing better value for users by constantly refining the platform’s features and incentives.
In terms of technology, we’re focused on training a better, more accurate model that can provide users with precise estimates of what they can get for their clothing. The goal is to ensure that if the app quotes a resale value, users can walk into a store and reliably receive that amount. This will strengthen the app’s trustworthiness and provide real-world benefits to users.
Adobe Express Add-On for ECommerce+Marketing.
At EcoCloset, our goal is to create a vibrant e-commerce marketplace and community for thrifted and gently used clothing. This will allow users to appraise, sell, showcase, and share their sustainable fashion items within a connected community.
We plan to integrate Adobe Express as an add-on, enabling users to create e-commerce content directly from their uploaded clothing photos. After receiving an item analysis in the EcoCloset app, users can seamlessly export their images to Adobe Express to design custom marketing materials such as ads, social media posts, or product listings. Using stickers, icons, data visualization, and pre-built templates, users can create engaging, professional content to promote their items.
As part of this hackathon theme we have plans to develop a prototype add-on for Adobe Express. This would allow users to generate e-commerce assets with just a few clicks, featuring customizable templates, branding options, and tools for sustainable fashion marketing.
Through this integration, EcoCloset will foster a community-driven marketplace, where users can buy, sell, and showcase their thrifted fashion items, promoting both sustainability and creativity. | partial |
## Inspiration
DermaDetect was born out of a commitment to improve healthcare equity for underrepresented and economically disadvantaged communities, including seniors, other marginalized populations, and those impacted by economic inequality.
Recognizing the prohibitive costs and emotional toll of traditional skin cancer screenings, which often result in benign outcomes, we developed an open-source AI-powered application to provide preliminary skin assessments.
This innovation aims to reduce financial burdens and emotional stress, offering immediate access to health information and making early detection services more accessible to everyone, regardless of their societal status.
## What it does
* AI-powered analysis: Fine-tuned Resnet50 Convolutional Neural Network classifier that predicts skin lesions as benign versus cancerous by leveraging the open-source HAM10000 dataset.
* Protecting patient data confidentiality: Our application uses OAuth technology (Clerk and Convex) to authenticate and verify users logging into our application, protecting patient data when users upload images and enter protected health information (PHI).
* Understandable and age-appropriate information: Prediction Guard LLM technology offers clear explanations of results, fostering informed decision-making for users while respecting patient data privacy.
* Journal entry logging: Using the Convex backend database schema allows users to make multiple journal entries, monitor their skin, and track moles over long periods.
* Seamless triaging: Direct connection to qualified healthcare providers eliminates unnecessary user anxiety and wait times for concerning cases.
## How we built it
**Machine learning model**
TensorFlow, Keras: Facilitated our model training and model architecture, Python, OpenCV, Prediction Guard LLM, Intel Developer Cloud, Pandas, NumPy, Sklearn, Matplotlib
**Frontend**
TypeScript, Convex, React.js, Shadcn (Components), FramerMotion (Animated components), TailwindCSS
**Backend**
TypeScript, Convex Database & File storage, Clerk (OAuth User login authentication), Python, Flask, Vite, InfoBip (Twillio-like service)
## Challenges we ran into
* We had a lot of trouble cleaning and applying the HAM10000 skin images dataset. Due to long run times, we found it very challenging to make any progress on tuning our model and sorting the data. We eventually started splitting our dataset into smaller batches and training our model on a small amount of data before scaling up which worked around our problem. We also had a lot of trouble normalizing our data, and figuring out how to deal with a large Melanocytic nevi class imbalance. After much trial and error, we were able to correctly apply data augmentation and oversampling methods to address the class imbalance issue.
* One of our biggest challenges was setting up our backend Flask server. We encountered so many environment errors, and for a large portion of the time, the server was only able to run on one computer. After many Google searches, we persevered and resolved the errors.
## Accomplishments that we're proud of
* We are incredibly proud of developing a working open-source, AI-powered application that democratizes access to skin cancer assessments.
* Tackling the technical challenges of cleaning and applying the HAM10000 skin images dataset, dealing with class imbalances, and normalizing data has been a journey of persistence and innovation.
* Setting up a secure and reliable backend server was another significant hurdle we overcame. The process taught us the importance of resilience and resourcefulness, as we navigated through numerous environmental errors to achieve a stable and scalable solution that protects patient data confidentiality.
* Integrating many technologies that were new to a lot of the team such as Clerk for authentication, Convex for user data management, Prediction Guard LLM, and Intel Developer Cloud.
* Extending beyond the technical domain, reflecting a deep dedication to inclusivity, education, and empowerment in healthcare.
## What we learned
* Critical importance of data quality and management in AI-driven applications. The challenges we faced in cleaning and applying the HAM10000 skin images dataset underscored the need for meticulous data preprocessing to ensure AI model accuracy, reliability, and equality.
* How to Integrate many different new technologies such as Convex, Clerk, Flask, Intel Cloud Development, Prediction Guard LLM, and Infobip to create a seamless and secure user experience.
## What's next for DermaDetect
* Finding users to foster future development and feedback.
* Partnering with healthcare organizations and senior communities for wider adoption.
* Continuously improving upon data curation, model training, and user experience through ongoing research and development. | # Inspiration
Meet one of our teammates, Lainey! Over the past three years, she has spent over 2,000 hours volunteering with youth who attend under-resourced schools in Washington state. During the sudden onset of the pandemic, the rapid school closures ended the state’s Free and Reduced Lunch program for thousands of children across the state, pushing the burden of purchasing healthy foods onto parents. It became apparent that many families she worked with heavily relied on government-provided benefits such as SNAP (Supplemental Nutrition Assistance Program) to purchase the bare necessities. Research shows that SNAP is associated with alleviating food insecurity. Receiving SNAP in early life can lead to improved outcomes in adulthood. Low-income families under SNAP are provided with an EBT (Electronic Benefit Transfer) card and are able to load a monthly balance and use it like a debit card to purchase food and other daily essentials.
However, the EBT system still has its limitations: to qualify to accept food stamps, stores must sell food in each of the staple food categories. Oftentimes, the only stores that possess the quantities of scale to achieve this are a small set of large chain grocery stores, which lack diverse healthy food options in favor of highly-processed goods. Not only does this hurt consumers with limited healthy options, it also prevents small, local producers from selling their ethically and sustainably sourced produce to those most in need. Studies have repeatedly shown a direct link between sustainable food production and food health quality.
The primary grocery sellers who have the means and scale to qualify to accept food stamps are large chain grocery stores, which often have varying qualities of produce (correlated with income in that area) that pale compared to output from smaller farms. Additionally, grocery stores often supplement their fresh food options with a large selection of cheaper, highly-processed items that are high in sodium, cholesterol, and sugar. On average, unhealthy foods are about $1.50 cheaper per day than healthy foods, making it both less expensive and less effort to choose those options. Studies have shown that lower income individuals “consume fewer fruits and vegetables, more sugar-sweetened beverages, and have lower overall diet quality”. This leads to deteriorated health, inadequate nutrition, and elevated risk for disease. In addition, groceries stores with healthier, higher quality products are often concentrated in wealthy areas and target a higher income group, making distance another barrier to entry when it comes to getting better quality foods.
Meanwhile, small, local farmers and stores are unable to accept food stamp payments. Along with being higher quality and supporting the community, buying local foods are also better for the environment. Local foods travel a shorter distance, and the structure of events like farmers markets takes away a customer’s dependency on harmful monocrop farming techniques. However, these benefits come with their own barriers as well. While farmers markets accept SNAP benefits, they (and similar events) aren’t as widespread: there are only 8600 markets registered in the USDA directory, compared to the over 62,000 grocery stores that exist in the USA. And the higher quality foods have their own reputation of higher prices.
Locl works to alleviate these challenges, offering a platform that supports EBT card purchases to allow SNAP benefit users to purchase healthy food options from local markets.
# What does Locl do?
Locl works to bridge the gap between EBT cardholders and fresh homegrown produce. Namely, it offers a platform where multiple local producers can list their produce online for shoppers to purchase with their EBT card. This provides a convenient and accessible way for EBT cardholders to access healthy meals, while also promoting better eating habits and supporting local markets and farmers. It works like a virtual farmers market, combining the quality of small farms with the ease and reach of online shopping. It makes it easier for a consumer to buy better quality foods with their EBT card, while also allowing a greater range of farms and businesses to accept these benefits. This provides a convenient and accessible way for EBT cardholders to access healthy meals, while also promoting better eating habits and supporting local markets and farmers.
When designing our product, some of our top concerns were the technological barrier of entry for consumers and ensuring an ethical and sustainable approach to listing produce online. To use Locl, users are required to have an electronic device and internet connection, ultimately limiting access within our target audience. Beyond this, we recognized that certain produce items or markets could be displayed disproportionally in comparison to others, which could create imbalances and inequities between all the stakeholders involved. We aim to address this issue by crafting a refined algorithm that balances the search appearance frequency from a certain product based on how many similar products like such are posted.
# Key Features
## EBT Support
Shoppers can convert their EBT balance into Locl credits. From there, they can spend their credits buying produce from our set of carefully curated suppliers. To prevent fraud, each vendor is carefully evaluated to ensure they sell ethically sourced produce. Thus, shoppers can only spend their Locl credits on produce, adhering to government regulation on SNAP benefits.
## Bank-less payment
Because low-income shoppers may not have access to a bank account, we've used Checkbook.io's virtual credit cards and direct deposit to facilitate payments between shoppers and vendors.
## Producer accessibility
By listing multiple vendors on one platform, Locl is able to circumvent the initial problems of scale. Rather than each vendor being its own store, we consolidate them all into one large store, thereby increasing accessibility for consumers to purchase products from smaller vendors.
## Recognizable marketplace
To improve the ease of use, Locl's interface is carefully crafted to emulate other popular marketplace applications such as Facebook Marketplace and Craigslist. Because shoppers will already be accustomed to our app, it'll far improve the overall user experience.
# How we built it
Locl revolves around a web app interface to allow shoppers and vendors to buy and sell produce.
## Flask
The crux of Locl centers on our Flask server. From there, we use requests and render\_templates() to populate our website with GET and POST requests.
## Supabase
We use Supabase and PostgreSQL to store our product, market, virtual credit card, and user information. Because Flask is a Python library, we use Supabase's community managed Python library to insert and update data.
## Checkbook.io
We use Checkbook.io's Payfac API to create transactions between shoppers and vendors. When people create an account on Locl, they are automatically added as a user in Checkbook with the `POST /v3/user` endpoint. Meanwhile, to onboard both local farmers and shoppers painlessly, we offer a bankless solution with Checkbook’s virtual credit card using the `POST /v3/account/vcc` endpoint.
First, shoppers deposit credits into their Locl account from the EBT card. The EBT funds are later redeemed with the state government by Locl. Whenever a user buys an item, we use the `POST /v3/check/digital` endpoint to create a transaction between them and the stores to pay for the goods. From there, vendors can also spend their funds as if it were a prepaid debit card. By using Checkbook’s API, we’re able to break down the financial barrier of having a bank account for low-income shoppers to buy fresh produce from local suppliers, when they otherwise wouldn’t have been able to.
# Challenges we encountered
Because we were all new to using these APIs, we were initially unclear about what actions they could support. For example, we wanted to use You.com API to build our marketplace. However, it soon became apparent that we couldn't embed their API into our static HTML page as we'd assume. Thus, we had to pivot to creating our own cards with Jinja.
# Looking forward
In the future, we hope to advance our API services to provide a wider breadth of services which would include more than just produce from local farmers markets. Given a longer timeframe, a few features we'd like to implement include:
* a search and filtering system to show shoppers their preferred goods.
* an automated redemption system with the state government for EBT.
* improved security and encryption for all API calls and database queries.
# Ethics
SNAP (Supplemental Nutrition Assistance Program), otherwise known as food stamps, is a government program that aids low-income families and individuals to purchase food. The inaccessibility of healthy foods is a pressing problem because there is a small number of grocery stores that accept food stamps, which are often limited to large, chain grocery stores that are not always accessible. Beyond this, these grocery stores often lack healthy food options in favor of highly-processed goods.
When doing further research into this issue, we were fortunate to have a team member who has knowledge about SNAP benefits through firsthand experience in classroom settings and at food banks. Through this, we learned about EBT (Electronic Benefit Transfer) cards, as well as their limitations. The only stores that can support EBT payments must offer a selection for each of the staple food categories, which prevents local markets and farmers from accepting food stamps as payment.
To tackle this issue of the limited accessibility of healthy foods for SNAP benefit users, we came up with Locl, an online platform that allows local markets and farmers to list fresh produce for EBT cardholders to purchase with food stamps. When creating Locl, we adhered to our goal of connecting food stamp users with healthy, ethically sourced foods in a sustainable manner. However, there are still many ethical challenges that must be explored further.
First, to use Locl, users would require a portable electronic device and an internet connection due to it being an online platform. The Pew Research center states that 29% of adults with incomes below $30,000/year do not have access to a smartphone and 44% do not have portable internet access. This would greatly lessen the range of individuals that we aim to serve.
Second, though Locl aims to serve SNAP beneficiaries, we also hope to aid local markets and farmers by increasing the number of potential customers. However, Locl runs the risk of displaying certain produce items or marketplaces disproportionately in comparison to others, which could create imbalances and inequities between all stakeholders involved. Furthermore, this display imbalance could limit user knowledge about certain marketplaces.
Third, Locl aims to increase ethical consumerism by connecting its users with sustainable markets and farmers. However, there arises the issue of selecting which markets and farmers to support on our platform. While considering baselines that we would expect marketplaces to meet to be displayed on Locl, we recognized that sustainability can be measured through a wide number of factors- labor, resources used, pollution levels, and began wondering whether we prioritize sustainability of items we market or the health of users. One example of this is meat, a popular food product which is known for its high health benefits, but similarly high water consumption and greenhouse gas levels. Narrowing these down could greatly limit the display of certain products.
Fourth, Locl does not have an option for users to filter the results that are displayed to them. Many EBT cardholders say that they do not use their benefits to make online purchases due to the difficulty of finding items on online store pages that qualify for their benefits as well as their dietary needs. Thus, our lack of a filter option would cause certain users to have increased difficulty in finding food options for themselves.
Our next step for Locl is to address the ethical concerns above, as well as explore ways to make it more accessible and well-known. However, there are still many components to consider from a sociotechnical lens. Currently, only 4% of SNAP beneficiaries make online purchases with their EBT cards. This small percentage may stem from reasons that range from lack of internet access, to not being aware that online options are available. We hope that with Locl, food stamp users will have increased access to healthy food options and local markets and farmers will have an increased customer-base.
# References
<https://ajph.aphapublications.org/doi/full/10.2105/AJPH.2019.305325>
<https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-019-6546-2>
<https://www.ibisworld.com/industry-statistics/number-of-businesses/supermarkets-grocery-stores-united-states/> <https://www.masslive.com/food/2022/01/these-are-the-top-10-unhealthiest-grocery-items-you-can-buy-in-the-united-states-according-to-moneywise.html>
<https://farmersmarketcoalition.org/education/qanda/>
<https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-019-6546-2>
<https://news.climate.columbia.edu/2019/08/09/farmers-market-week-2019/> | ## FlyBabyFly
[link](https://i.imgur.com/9uIFF1K.png)
[link](https://i.imgur.com/O55PRJE.jpg)
## What it does
Its a mobile app that matches people with travel destinations while providing great deals.
Swipe right to like! Swipe left to see next deal.
After you like a city you can learn about the city's attractions, landmarks and history. It can help you book right from the app, just pick a date!
## How we built it
We used React Native and JetBlue's provided data.
## Accomplishments that we're proud of
We believe this is a refreshing approach to help people discover new cities and help them plan their next vacation
## What's next for FlyBabyFly
* Add vacation packages that include hotel bundles for better prices. | partial |
Investormate is a automated financial assistant for investors, providing them financial data
Investormate provides a simple, easy-to-use interface for retrieving financial data from any company on the NASDAQ. It can retrieve not only the prices, but it also calculates technical indicators such as the exponential moving average and stochastic oscillator values.
It has a minimalist design and provides data in a very clean fashion so users do not have to dig tons of financial information to get what they want. Investormate also interprets natural language so no complex syntax is required to make a query. | ## Inspiration
Our project aims to democratize algorithmic trading and the data associated with it to capture a 150 billion dollar a month trade volume and aim it towards transparency and purpose. Our project is a culmination of the technical depth and breadth of a new step forward in technology. We really wanted to open up the centralized and ever exclusive profession of algorithmic trading, giving the average retail trader the same tools and data as billion dollar companies. Empowering curiosity and innovation through open sourced data and tools.
## What it does
Our project is an brokerage platform that hosts compute, data APIs, processing tools, algorithms, and data-stream. A mix of the usability of Robinhood with the decentralized community building of technical people like Kaggle. We transform strategies and ideas that require a huge amount of capital and expertise, and hand it to the every-day retail investor. When customers upload code, our platform allows them to test their ideas out on paper trades, and when they are confident enough, we host and execute with a small fee on our infrastructure.
There are two distinct properties of our project that stand out as unique:
1. **Algorithm Marketplace**: Anyone can create an algorithm and earn commission by allowing others to run the algorithm. This means investors can invest in unorthodox and unique public algorithms, which removes all financial and technological barriers young analysts and programmers without any capital might face. By doing so, the project opens up the investment community to a new diverse and complex set of financial products.
2. **Collaborative Data Streams**: All users are encouraged to connect their individual data streams, growing a community that helps each other innovate and develop refined and reliable sources of data. This can serve as both a gateway to accessibility and a lens of transparency, enabling users to track and encourage responsible investments, allowing users to monitor and invest in entities that emphasize certain causes such as sustainability or other social movements.
## How we built it
Our project was specifically made in two stages, the framework and then the use cases. We used our combined experience at Bloomberg and Amazon working with this type of data to create a framework that is both highly optimized and easy to use. Below, we highlight three use case examples that were developed on our platform.
## Use Case 1: Technical Analysis Using Independent Component Analysis (ICA)
1. **Traditional Stock Analysis Using ICA**:
* We utilize **Independent Component Analysis (ICA)** to decompose a data matrix 𝑋 (observations) into independent components. The goal is to estimate 𝐴 (mixing matrix) and 𝑆 (source signals), assuming 𝑆 contains statistically independent components.
* ICA maximizes non-Gaussianity (e.g., using kurtosis or negentropy) to ensure independence, allowing us to identify independent forces or components in mixed signals that contribute to changes in the overall system.
2. **Cosine Similarity Between Stocks**:
* By analyzing the independent components driving the stock prices, we perform **cosine similarity** between them. This generates a value within the range of [-1, 1], representing how much any two stocks share these independent components.
3. **Dynamic Graph Representation**:
* We build an **updating graph** based on the relationships derived from the cosine similarity, providing real-time insight into how stocks are interrelated through their independent components.
## Use Case 2: Prediction Algorithm
* Our second use case involves a **prediction algorithm** that tracks stock movement and applies trend-based estimation across various stocks.
* This demonstrates a **low latency real-time application**, emphasizing the capability of **SingleStore** for handling real-time database operations, and showing how the platform can support high-speed, real-time financial data processing.
## Challenges we ran into
We encountered several challenges, including latency issues, high costs, and difficulties with integrating real-time data processing due to rate limits and expenses. Another hurdle was selecting the right source for real-time stock data, as both maintaining the database and processing the data were costly, with the data stream alone costing nearly $60.
## Accomplishments that we're proud of
We collectively managed to create a framework that is impressive on a technical scale and scalable as we look into the future of this project.
## What we learned
We gained experience with data normalization techniques for stock data and learned how to sync time series datasets with missing information. We also had to think deeply about the scalability of our platform and the frictionless experience we wanted to present.
## What's next for The OpenTradeLab
We have several initiatives we're excited to work on:
1. **Growing Communities Around Social Investments**:
* We aim to explore sustainable ways to build and foster communities focused on investment in social causes.
2. **Direct Exchange Connectivity**:
* We're looking into the possibility of connecting directly to an exchange to enable real-time trade routing.
3. **Optimized Code Conversion**:
* We plan to develop an API and library that converts Python code into optimized C++ code for enhanced performance.
4. **Investment Safeguards**:
* Implementing safeguards to promote responsible and secure investment practices is another key area of focus. | ## Inspiration
We all know that great potential lies within the stock markets, but how many of us have the time and money to put into investments? With Minvest anyone can start investing with no minimum portfolio balance, and no prior experience with investments needed. Our platform is well integrated to your bank account, and you decide how much money to invest in, with the option to withdraw any amount at any time. Sit back and watch your investments grow as we expertly manage a well diversified portfolio on your behalf.
## What it does
Minvest is an application which uses algorithmic trading to manage clients' investment portfolios. The client simply transfers any amount of money from their bank account into the investment platform, and our algorithms take care of the rest. Each user is classified to a certain investment "style" or "strategy", depending on their own personal preferences, and based on these profiles, our algorithms pick out the best investments, and perform appropriate trades on the stock market when the timing is right. Our platform is a form of "crowd investing" in that users pool their money together in order to make investments that they normally would not be able to on their own. There are two current challenges with investments: either one cannot afford to make the minimum investments (usually minimum 100 shares, or a minimum dollar investment amount), or one cannot afford to diversify their portfolio in order to minimize their risks and maximize their opportunities. With Minvest, users now have access to investments previously out of their reach, and with trading and portfolios managed through algorithms, the management cost stays low, allowing us to offer no minimum investment required for our users. We believe that with this platform, more individuals will be able to benefit from the markets, as well as have a financial peace of mind that their money is expertly managed and will grow in the future.
## How we built it
Our front-end client is an Android application, and our back end is created with Django. We used several APIs in our back-end to build our services algorithms. Using Capital One Nessie API we tightly integrate users' bank accounts, and allow them to easily transfer money from their bank into the platform, or withdraw from the platform and deposit into their bank accounts. In creating our algorithm that determines which securities to build our portfolio with, we used the Yahoo Finance API, which brings us key performance indicators of securities in which we analyze to select the best investments. The actual trading of these securities would be done through the Zipline API, which is built with Python. This API is used to create professional trading algorithms, and allows us to build and backtest our algorithms to 10 years of historical data, with full performance and risk indicators. Displaying all this information is our Android client, which gives users the commands to deposit into and withdraw from the investment platform, to view their book value, portfolio value, change, and percentage change.
## Challenges we ran into
The greatest challenge we ran into was developing our algorithms. We had to do in depth research about investment performance indicators, from P/E and P/B ratios to alpha beta risk ratios, to how book values are properly calculated when it comes to making multiple trades with different securities at different prices. However, once we got a hang of it, we were able to identify important traits of the securities that would help us make investment decisions.
## Accomplishments that we're proud of
We're proud of the fact that we were able to learn a lot about investments, as well as being able to implement a few different APIs together to make a final product!
## What we learned
We learned a lot not only about investments, but applied our learning to create algorithms.
## What's next for Minvest
Next steps would be to further develop our algorithms to be able adapt to more market situations, and to allow for users who may be more advanced to have the option of further narrowing down what industries they would like to invest in, such as financial services, utilities, commodities, etc. | losing |
# BananaExpress
A self-writing journal of your life, with superpowers!
We make journaling easier and more engaging than ever before by leveraging **home-grown, cutting edge, CNN + LSTM** models to do **novel text generation** to prompt our users to practice active reflection by journaling!
Features:
* User photo --> unique question about that photo based on 3 creative techniques
+ Real time question generation based on (real-time) user journaling (and the rest of their writing)!
+ Ad-lib style questions - we extract location and analyze the user's activity to generate a fun question!
+ Question-corpus matching - we search for good questions about the user's current topics
* NLP on previous journal entries for sentiment analysis
I love our front end - we've re-imagined how easy and futuristic journaling can be :)
And, honestly, SO much more! Please come see!
♥️ from the Lotus team,
Theint, Henry, Jason, Kastan | ## Inspiration
We are part of a generation that has lost the art of handwriting. Because of the clarity and ease of typing, many people struggle with clear shorthand. Whether you're a second language learner or public education failed you, we wanted to come up with an intelligent system for efficiently improving your writing.
## What it does
We use an LLM to generate sample phrases, sentences, or character strings that target letters you're struggling with. You can input the writing as a photo or directly on the webpage. We then use OCR to parse, score, and give you feedback towards the ultimate goal of character mastery!
## How we built it
We used a simple front end utilizing flexbox layouts, the p5.js library for canvas writing, and simple javascript for logic and UI updates. On the backend, we hosted and developed an API with Flask, allowing us to receive responses from API calls to state-of-the-art OCR, and the newest Chat GPT model. We can also manage user scores with Pythonic logic-based sequence alignment algorithms.
## Challenges we ran into
We really struggled with our concept, tweaking it and revising it until the last minute! However, we believe this hard work really paid off in the elegance and clarity of our web app, UI, and overall concept.
..also sleep 🥲
## Accomplishments that we're proud of
We're really proud of our text recognition and matching. These intelligent systems were not easy to use or customize! We also think we found a creative use for the latest chat-GPT model to flex its utility in its phrase generation for targeted learning. Most importantly though, we are immensely proud of our teamwork, and how everyone contributed pieces to the idea and to the final project.
## What we learned
3 of us have never been to a hackathon before!
3 of us never used Flask before!
All of us have never worked together before!
From working with an entirely new team to utilizing specific frameworks, we learned A TON.... and also, just how much caffeine is too much (hint: NEVER).
## What's Next for Handwriting Teacher
Handwriting Teacher was originally meant to teach Russian cursive, a much more difficult writing system. (If you don't believe us, look at some pictures online) Taking a smart and simple pipeline like this and updating the backend intelligence allows our app to incorporate cursive, other languages, and stylistic aesthetics. Further, we would be thrilled to implement a user authentication system and a database, allowing people to save their work, gamify their learning a little more, and feel a more extended sense of progress. | # mindr
mindr is a cross-platform emotion monitoring app, essential for every smart home. The app
is tailored to provide an unintrusive and private solution to keep track of a child's well-being during all those times we can't be with them.
## How it works
The emotion monitoring happens on a device—a laptop, or any IoT device that supports
a camera—locally. Each frame from the camera is processed for emotional content
and severely emotional distress. The emotion processing is facilitated using
OpenCV, TensorFlow, and was trained using a convolutional neural network within
TFLearn. All of the image data is stored locally, to protect a parent's privacy.
The emotional data is then sent to a Django backend web-server where it is parsed
into a PostgreSQL database. The Django web-server then serves analytical data to
a React/Redux single-page web app which provides parents with a clean interface
for tracking their child's emotional behavior through time. Distressing emotional
events are expedited to the parent by sending notifications through SMS or email.
## Who we are
We are a team of upper-division computer science students passionate about creating
useful and well-made applications. We love coffee and muffins. | winning |
Copyright 2018 The Social-Engineer Firewall (SEF)
Written by Christopher Ngo, Jennifer Zou, Kyle O'Brien, and Omri Gabay.
Founded Treehacks 2018, Stanford University.
## Inspiration
No matter how secure your code is, the biggest cybersecurity vulnerability is the human vector. It takes very little to exploit an end-user with social engineering, yet the consequences are severe.
Practically every platform, from banking to social media, to email and corporate data, implements some form of self-service password reset feature based on security questions to authenticate the account “owner.”
Most people wouldn’t think twice to talk about their favourite pet or first car, yet such sensitive information is all that stands between a social engineer and total control of all your private accounts.
## What it does
The Social-Engineer Firewall (SEF) aims to protect us from these threats. Upon activation, SEF actively monitors for known attack signatures with voice to speech transcription courtesy of SoundHound’s Houndify engine. SEF is the world’s first solution to protect the OSI Level 8 (end-user/human) from social engineer attacks.
## How it was built
SEF is a Web Application written in React-Native deployed on Microsoft Azure with node.js. iOS and Android app versions are powered by Expo. Real-time audio monitoring is powered by the Houndify SDK API.
## Todo List
Complete development of TensorFlow model
## Development challenges
Our lack of experience with new technologies provided us with many learning opportunities. | ## Inspiration
1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at a greater risk. As the mental health epidemic surges and support at its capacity, we sought to build something to connect trained volunteer companions with people in distress in several ways for convenience.
## What it does
Vulnerable individuals are able to call or text any available trained volunteers during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device to increase accessibility and comfort.
## How I built it
Using Figma, we designed the front end and exported the frame into Reacts using Acovode for back end development.
## Challenges I ran into
Setting up the firebase to connect to the front end react app.
## Accomplishments that I'm proud of
Proud of the final look of the app/site with its clean, minimalistic design.
## What I learned
The need for mental health accessibility is essential but unmet still with all the recent efforts. Using Figma, firebase and trying out many open-source platforms to build apps.
## What's next for HearMeOut
We hope to increase chatbot’s support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages. | ## Inspiration
As technological innovations continue to integrate technology into our lives, our personal property becomes more and more tied to the digital world. Our work, our passwords, our banking information, and more are all stored in our personal computers. Security is ever more imperative—especially so at a university campus like Princeton's, where many students feel comfortable leaving their laptops unattended.
## What it does
Our solution is Cognito—an application that alerts users if their laptops are being used by other people. After launching the app, the user simply types in their phone number, gets in position for a clear photo, and at the press of a button, Cognito will now remember the user's face. If someone other than the main user tries to use the laptop without the user being in frame, Cognito sends a text message to the user, alerting them of the malicious intruder.
## How we built it
The application was written in Java using Eclipse and Maven. We used [Webcam Capture API](https://github.com/sarxos/webcam-capture) to interface with the laptop webcam and [OpenIMAJ](http://openimaj.org/) to initially detect faces. Then, using Microsoft Cognitive Services ([Face API](https://azure.microsoft.com/en-us/services/cognitive-services/face/)), we compare all of the faces in frame to the user's stored face. Finally, if an intruder is detected, we integrated with [Twilio](https://www.twilio.com/) to send an SMS message to the user's phone.
## Challenges we ran into
Ranged from members' adversity to sleep deprivation. Also, the cold (one of our group members is from the tropics).
On a more serious note, for many of us, this was our first time participating in a Hackathon or working on integrating APIs. A significant portion of the time was spent choosing and learning to use them, and even after that, working on bug fixes in the individual components took up a substantial chunk of time.
## Accomplishments that we're proud of
Most of us did not have any experience working with APIs prior, much less under the time constraint of a Hackathon. Thus, the feeling of seeing the final working product that we had worked so hard to develop functioning as intended was pretty satisfying.
## What we learned
We did not have significant project experience outside of class assignments, and so we all learned a lot about APIs, bug fixing, and the development process in general.
## What's next for Cognito
Things we'd like to do:
* Hold a *database* of authorized "friends" that won't trigger a warning (in addition to just the core user).
* Provide more choice regarding actions triggered by detection of an unauthorized user (e.g. sending a photo of the malicious user to the main user, disabling mouse & keyboard, turning off screen, system shutdown altogether).
* Develop a clean and more user-friendly UI. | winning |
## Inspiration
**Powerful semantic search for your life does not currently exist.**
Google and ChatGPT have brought the world’s information to our fingertips, yet our personal search engines — Spotlight on Mac, and search on iOS or Android — are insufficient.
Google Assistant and Siri tried to solve these problems by allowing us to search and perform tasks with just our voice, yet their use remains limited to a narrow range of tasks. **Recent advancement in large language models has enabled a significant transformation in what's possible with our devices.**
## What it does
That's why we made Best AI Buddy, or BAIB for short.
**BAIB (pronounced "babe") is designed to seamlessly answer natural language queries about your life.** BAIB builds an index of your personal data — text messages, emails, photos, among others — and runs a search pipeline on top of that data to answer questions.
For example, you can ask BAIB to give you gift recommendations for a close friend. BAIB looks pertinent interactions you've had with that friend and generates gift ideas based on their hobbies, interests, and personality. To support its recommendations, **BAIB cites parts of past text conversations you had with that friend.**
Or you can ask BAIB to tell you about what happened the last time you went skiing with friends. BAIB intelligently combines information from the ski group chat, AirBnB booking information from email, and your Google Photos to provide you a beautiful synopsis of your recent adventure.
**BAIB understands “hidden deadlines”** — that form you need to fill out by Friday or that internship decision deadline due next week — and keeps track of them for you, sending you notifications as these “hidden deadlines” approach.
**Privacy is an essential concern.** BAIB currently only runs on M1+ Macs. We are working on running a full-fledged LLM on the Apple Neural Engine to ensure that all information and processing is kept on-device. We believe that this is the only future of BAIB that is both safe and maximally helpful.
## How we built it
Eventually, we plan to build a full-fledged desktop application, but for now we have built a prototype using the SvelteKit framework and the Skeleton.dev UI library. We use **Bun as our TypeScript runtime & toolkit.**
**Python backend.** Our backend is built in Python with FastAPI, using a few hacks (check out our GitHub) to connect to your Mac’s contacts and iMessage database. We use the Google API to connect to Gmail + photos.
**LLM-guided search.** A language model makes the decisions about what information should be retrieved — what keywords to search through different databases — and when to generate a response or continue accumulating more information. A beautiful, concise answer to a user query is often a result of many LLM prompts and aggregation events.
**Retrieval augmented generation.** We experimented with vector databases and context-window based RAG, finding the latter to be more effective.
**Notifications.** We have a series of “notepads” on which the LLM can jot down information, such as deadlines. We then later use a language model to generate notifications to ensure you don’t miss any crucial events.
## Challenges we ran into
**Speed.** LLM-guided search is inherently slow, bottlenecked by inference performance. We had a lot of difficulty filtering data before giving it to the LLM for summarization and reasoning in a way that maximizes flexibility while minimizing cost.
**Prompt engineering.** LLMs don’t do what you tell them, especially the smaller ones. Learning to deal with it in a natural way and work around the LLMs idiosyncrasies was important for achieving good results in the end.
**Vector search.** Had issues with InterSystems and getting the vector database to work.
## Accomplishments that we're proud of
**BAIB is significantly more powerful than we thought.** As we played around with BAIB and asked fun questions like “what are the weirdest texts that Tony has sent me?”, its in-depth analysis on Tony’s weird texts were incredibly accurate: “Tony mentions that maybe his taste buds have become too American… This reflection on cultural and dietary shifts is interesting and a bit unusual in the context of a casual conversation.” This has increased our conviction in the long-term potential of this idea. We truly believe that this product must and will exist with or without us.
**Our team organization was good (for a hackathon).** We split our team into the backend team and the frontend team. We’re proud that we made something useful and beautiful.
## What we learned
Prompt engineering is very important. As we progressed through the project, we were able to speed up the response significantly and increase the quality by just changing the way we framed our question.
ChatGPT 4.0 is more expensive than we thought.
Further conviction that personal assistants will have a huge stake in the future.
Energy drinks were not as effective as expected.
## What's next for BAIB
Building this powerful prototype gave us a glimpse of what BAIB could really become. We believe that BAIB can be integrated into all aspects of life. For example, integrating with other communication methods like Discord, Slack, and Facebook will allow the personal assistant to gain a level of organization and analysis that would not be previously possible.
Imagine getting competing offers at different companies and being able to ask BAIB, who can combine the knowledge of the internet with the context of your family and friends to help give you enough information to make a decision.
We want to continue the development of BAIB after this hackathon and build it as an app on your phone to truly become the Best AI Buddy. | ## Inspiration
A close friend of ours was excited about her future at Stanford and beyond– she had a supportive partner, and a bright future ahead of her. But when she found out she was unexpectedly pregnant, her world turned upside down. She was shocked and scared, unsure of what to do. She knew that having a child right now wasn't an option for her. She wasn't ready, financially or emotionally, to take on the responsibility of motherhood. But with the recent overturn of Roe v Wade, she wasn't sure what her options were for an abortion.
She turned to ChatGPT for answers, hoping to find accurate and reliable information. She typed in her questions, and the AI-powered language model responded with what seemed like helpful advice.
But as she dug deeper into the information she was getting, she began to realize that not all of it was accurate. The sources that ChatGPT was referring to for clinics were in locations where abortion was no longer legal. She started to feel overwhelmed and confused. She didn't know who to trust or where to turn for accurate information about her options. She felt trapped like her fate was being decided by forces beyond her control.
With that, we realized that ChatGPT and its underlying technology (GPT3) was incredibly powerful, but had extremely systematic and foundational flaws. These are technologies that now millions are beginning to rely on, but it struggles with issues that are intrinsic to the value it’s meant to provide. We knew that it was necessary to build something better, safer, more accurate, and leveraged tools – specifically retrieval augmentation – in order to improve accuracy and provide responses based on information that the system hasn’t been trained on (for instance events and content since 2021). Enter Golden Retriever.
## What it does
Imagine having access to an intelligent assistant that can help you navigate the vast sea of information out there. In many ways, we have that with ChatGPT and GPT3, but Golden Retriever, our tool, puts an end to character limitations on prompts/queries, eliminates the risk of “hallucination,” meaning answering questions incorrectly and inaccurately but confidently, and answers the questions you need to be answered (including and especially when you probe it) with incredible depth and detail. Further, it allows you to provide sources you’d want it to analyze, whereas current GPT tools are limited to information it has been trained on. Retrieval augmentation is a game-changing technology, and entirely revolutionizes the way we approach closed-domain question answering.
That’s why we built Golden Retriever. How does Golden Retriever circumvent these challenges? Golden Retriever uses a data structure that allows for a larger prompt size and gives you the freedom to connect to any external data source.
The use case we envision as incredibly pertinent in today’s world is legal counsel – traditionally, it’s expensive, inaccessible, and is a resource that most underrepresented and marginalized communities in the United States don’t have adequate access to. Golden Retriever is a revolutionary tool that can provide you with reliable legal advice when you can't afford to consult a legal professional. Whether you're facing a legal issue but don't have the time or money to consult a lawyer, or you simply want to gain a better understanding of your legal rights and responsibilities, such as when it comes to abortion, Golden Retriever can help.
As it pertains to this use case, with Golden Retriever, you can easily connect to a wide range of external data sources, including legal databases, court cases, and legal articles, to obtain accurate and up-to-date information about the legal issue you're facing. You can ask specific legal questions and receive detailed responses that take into account the context and specifics of your situation. You can even probe it to get specific advice based on your personal circumstances.
For example, imagine you're facing a difficult decision related to abortion, but you don't have the resources to consult a legal professional. Using Golden Retriever which leverages GPT Index, you can input your query and obtain a detailed response that outlines your legal rights and responsibilities, as well as any potential legal remedies available to you – it all simply depends on the information you’re looking for and ask.
## How we built it
First, we loaded in the data using a Data Connector called SimpleDirectoryReader, which parses over a specified directory containing files that contain the data. Then, we wrote a Python script where we used GPTKeywordTableIndex as the interface that would connect our data with a GPT LLM using GPTIndex. We feed the pre-trained LLM with a large corpus of text that acts as the knowledge database of the GPT model. Then we group chunks of the textual data into nodes and extract the keywords from the nodes, also building a direct mapping from each keyword back to its corresponding node.
Then we prompt the user for a query in a GUI created in Flask. GPT Keyword Table Index gets a list of tuples that contain the nodes that store chunks of relevant textual data. We extract relevant keywords from the query and match those with pre-extracted node keywords to fetch the corresponding nodes. Once we have the nodes, we prepend the information in the nodes to the query and feed that into GPT to create an object response. This object contains the summarized text that will be displayed to the user and all the nodes' information that contributed to this summary. We are able to essentially cite the information we display to the user, where each node is uniquely identified by a Doc id.
## Challenges we ran into
When we gave a query and found a bunch of keywords on pre-processed nodes, it wasn’t hard to generate an effective response, but it was hard to find the source of the text, and finding what chunks of data from our database our system was using to construct a response. Meaning, one of the key features of our product was that the response shows exactly what information from our database it used to derive the conclusion it came to — generally, the system is “memoryless” and cannot be asked for effective and detailed follow-up questions about where it specifically generated that information. Nevertheless, we overcame this challenge and found out how to access the object where the data of the source leveraged for the response was being sourced.
## Accomplishments that we're proud of
Hallucination is considered one of the more dangerous and hard-to-diagnose issues within GPT3. When asked for sources to back up answers, GPT3 is capable of hallucinating sources, providing entirely rational-sounding justifications for its answers. Further, to properly prompt these tools to create unique and well-crafted answers, detailed prompts are necessary. Often, we’d even want prompts to leverage research articles, news articles, books, and extremely large data sets. Not only have we reduced hallucination to a negligible degree, but we’ve eliminated the limitations that come with maximum query sizes, and enabled any type of source and any quantity of sources to be leveraged in query responses and analyses.
This is a new frontier, and we’re excited and honored to have the privilege of bringing it about.
## What we learned
Our team members learned the entire lifecycle of a project in such a nascent phase – from researching current discoveries and work to building and leveraging these tools in unison with our own goals, to eventually using these outputs to make them easy to interface and interact with. When it comes to human-facing technologies such as chatbots and question-answering, human feedback and interaction are vital. In just 36 hours, we replicated this entire lifecycle, from the ideation phase to research, to build on top of current infrastructures, to developing new APIs and interfaces that are easy and fun to interact with. Given that one of the problems we’re attempting to solve is inaccessibility, doing so is vital.
## What's next for Golden Retriever: Retrieval Augmented GPT
Our current application of choice for Golden Retriever is making legal counsel more accurate, accessible, and affordable to broader audiences whether it comes to criminal or civil law. However, we genuinely see Golden Retriever as being an application to almost any use case – namely and most directly education, content generation for marketing and writing, and medical diagnoses. The guarantee we obtain from all inferences, answers, and analyses being backed by sources, and being able to feed in sources through retrieval augmentation that the system wasn’t even trained on, broadens the array of use cases beyond what we might have ever envisioned prior for AI chatbots. | ## Inspiration
it's really fucking cool that big LLMs (ChatGPT) are able to figure out on their own how to use various tools to accomplish tasks.
for example, see Toolformer: Language Models Can Teach Themselves to Use Tools (<https://arxiv.org/abs/2302.04761>)
this enables a new paradigm self-assembling software: machines controlling machines.
what if we could harness this to make our own lives better -- a lil LLM that works for you?
## What it does
i made an AI assistant (SMS) using GPT-3 that's able to access various online services (calendar, email, google maps) to do things on your behalf.
it's just like talking to your friend and asking them to help you out.
## How we built it
a lot of prompt engineering + few shot prompting.
## What's next for jarbls
shopping, logistics, research, etc -- possibilities are endless
* more integrations !!!
the capabilities explode exponentially with the number of integrations added
* long term memory
come by and i can give you a demo | losing |
## Inspiration
We were inspired by our passion for mental health awareness, journaling, and giraffes to create Giraffirmations. During this time of isolation, we found ourselves falling into negative mindsets and allowing feelings of hopelessness to creep in. This greatly impacted our mental health and we saw that journalling centred around gratitude helped to improve our attitudes. We also found ourselves spending hours in front of our computers and thought that it would be a good idea to allow for quick journalling breaks right from our favourite browser.
## What it does
Giraffirmations prompts users to reflect on positive experiences and promotes feelings of gratitude. Users can jot down their feelings and save them for future references, reinforcing happy thought patterns!
There is also a hidden easter egg for additional fun surprises to boost the user's mood :)
## How we built it
* 60% JavaScript
* 25% HTML
* 15% CSS
* 110% passion and fun! (plus some helpful APIs)
## Challenges we ran into
* Implementing tabs within the extension
* Using Chrome Storage Sync API
* Retrieving the world date and time using JavaScript
* Controlling Youtube ad frequencies
## Accomplishments that we're proud of
* Learning JavaScript in a day
* Working in a team of 2!
* Learning how to randomize link destinations
* Coming up with a great extension name
## What we learned
* Chrome Storage Sync API is HARD
* Colours and fonts matter
* Version control is a lifesaver
## What's next for Giraffirmations
* Showing all of the user's previous entries
* Implementing reminder notifications to journal
* Gamification aspects (growing a Giraffe through positivity!)
* Dark mode! | ## Inspiration
Whether you’re thriving in life or really going through it, research shows that writing down your thoughts throughout the day has many benefits. We wanted to add a social element to this valuable habit and build a sense of community through sharing and acknowledging each other’s feelings. However, even on the internet, we've noticed that it is difficult for people to be vulnerable for fear of judgement, criticism, or rejection.
Thus, we centred our problem around this challenge and asked the question: How might we create a sense of community and connection among journalers without compromising their sense of safety and authenticity when sharing their thoughts?
## What it does
With Yapyap, you can write daily journal entries and share them anonymously with the public. Before posting, our AI model analyzes your written entry and provides you with an emotion, helping to label and acknowledge your feelings.
Once your thoughts are out in the world, you can see how other people's days are going too and offer mutual support and encouragement through post reactions.
Then, the next day comes, and the cycle repeats.
## How we built it
After careful consideration, we recognized that most users of our app would favour a mobile version as it is more versatile and accessible throughout the day. We used Figma to create an interesting and interactive design before implementing it in React Native. On the backend, we created an API using AWS Lambda and API Gateway to read and modify our MongoDB database. As a bonus, we prepared a sentimental analyzer using Tensorflow that could predict the overall mood of the written entry.
## Challenges we ran into
Learning new technologies and figuring out how to deploy our app so that they could all communicate were huge challenges for us.
## Accomplishments that we're proud of
Being able to apply what we learned about the new technologies in an efficient and collaborative way. We're also proud of getting a Bidirectional RNN for sentiment analysis ready in a few hours!
## What we learned
How to easily deal with merge conflicts, what it's like developing software as a group, and overall just knowing how to have fun even when you're pulling an all-nighter!
## What's next for yapyap
More personable AI Chatbots, and more emotions available for analysis! | ## Inspiration
One of our team members was stunned by the number of colleagues who became self-described "shopaholics" during the pandemic. Understanding their wishes to return to normal spending habits, we thought of a helper extension to keep them on the right track.
## What it does
Stop impulse shopping at its core by incentivizing saving rather than spending with our Chrome extension, IDNI aka I Don't Need It! IDNI helps monitor your spending habits and gives recommendations on whether or not you should buy a product. It also suggests if there are local small business alternatives so you can help support your community!
## How we built it
React front-end, MongoDB, Express REST server
## Challenges we ran into
Most popular extensions have company deals that give them more access to product info; we researched and found the Rainforest API instead, which gives us the essential product info that we needed in our decision algorithm. However this proved, costly as each API call took upwards of 5 seconds to return a response. As such, we opted to process each product page manually to gather our metrics.
## Completion
In its current state IDNI is able to perform CRUD operations on our user information (allowing users to modify their spending limits and blacklisted items on the settings page) with our custom API, recognize Amazon product pages and pull the required information for our pop-up display, and dynamically provide recommendations based on these metrics.
## What we learned
Nobody on the team had any experience creating Chrome Extensions, so it was a lot of fun to learn how to do that. Along with creating our extensions UI using React.js this was a new experience for everyone. A few members of the team were also able to spend the weekend learning how to create an express.js API with a MongoDB database, all from scratch!
## What's next for IDNI - I Don't Need It!
We plan to look into banking integration, compatibility with a wider array of online stores, cleaner integration with small businesses, and a machine learning model to properly analyze each metric individually with one final pass of these various decision metrics to output our final verdict. Then finally, publish to the Chrome Web Store! | partial |
## Inspiration
A therapeutic app that's almost as therapeutic as it was to make, "dots" is a simple web app that only takes your name and as input, and outputs some positive words and reassuring energy.
## What it does
This app will quell your deepest insecurities, and empower you to carry on with your day and do your best!
## How I built it
Very simple html, css, sass, and javascript.
## Challenges I ran into
Learning everything, especially javascript
## Accomplishments that I'm proud of
My first app! Yay!
## What I learned
If you code long enough you can completely forget that you need to pee.
## What's next for dots
Next steps: add some animated elements! | ## Inspiration
With many people's stressful, fast-paced lives, there is not always time for reflection and understanding of our feelings. Journaling is a powerful way to help reflect but it can often be not only overwhelming to start on a blank page, but hard to figure out even what to write. We were inspired to provide the user with prompts and questions to give them a starting point for their reflection. In addition, we created features that summarized the user's thoughts into a short blurb to help them contextualize and reflect on their day and emotions.
## What it does
Whispers is an app that aims to help the user talk about and decipher their emotions by talking with Whisper, the app mascot, as the adorable cat prompts you with questions about your day in an effort to help you reflect and sum up your emotions. Through the use of randomly generated prompts, Whispers collects your responses to create a summary of your day and tries to help you figure out what emotions you are feeling.
## How we built it
The back-end was developed with Node.js libraries and the Co:here API to collect the user-inputted data and summarize it. Furthermore, the classify functionality from Co:here was used to conduct sentiment analysis on the user's inputs to determine the general emotions of their answers
The front-end was developed with React-native and UX designs from Figma. Express.js was used as middleware to connect the front-end to back-end.
## Challenges we ran into
Overall, we did not run into any new challenges when it comes to development. We had issues with debugging and connecting the front-end to the back-end. There was also trouble with resolving dependencies both within back--end and conducting requests from front-end for analysis. We resolved this using Express.js as a simple, easy-to-use middleware.
In addition, we had trouble translating some of our UI elements to the front-end. To resolve this, we used both online resources and re-adjusted our scope.
## Accomplishments that we're proud of
We are proud of the model that was made for our backend which was trained on a massive dataset to summarize the user's thoughts and emotions. Additionally, we are proud of the chat feature that prompts the user with questions as they go along and our overall UI design. Our team worked hard on this challenging project.
## What we learned
We learned a lot about front-end development and adjusting our scope as we go along. This helped us learn to resolve problems efficiently and quickly. We also learned a lot while working with our backend of how to train Co:here models and connect both the front-end and back-end and perform requests to the server.
## What's next for Whispers
We would like to continue to develop Whispers with several more features. The first would be a user authentication and login system so that it could be used across several platforms and users. Additionally, we would like to add voice-to-text functionality to our chat journaling feature to improve on the accessibility of our app. Lastly, we would like to expand on the archive functions so that users can go back further than just the current week and see their journals and emotions from previous weeks or months | # Inspiration
There are variety of factors that contribute to *mental health* and *wellbeing*. For many students, the stresses of remote learning have taken a toll on their overall sense of peace. Our group created **Balance Pad** as a way to serve these needs. Thus, Balance Pads landing page gives users access to various features that aim to improve their wellbeing.
# What it does
Balance Pad is a web-based application that gives users access to **several resources** relating to mental health, education, and productivity. Its initial landing page is a dashboard tying everything together to make a clear and cohesive user experience.
### Professional Help
>
> 1. *Chat Pad:* The first subpage of the application has a built in *Chatbot* offering direct access to a **mental heath professional** for instant messaging.
>
>
>
### Productivity
>
> 1. *Class Pad:* With the use of the Assembly API, users can convert live lecture content into text based notes. This feature will allow students to focus on live lectures without the stress of taking notes. Additionally, this text to speech aide will increase accessibility for those requiring note takers.
> 2. *Work Pad:* Timed working sessions using the Pomodoro technique and notification restriction are also available on our webpage. The Pomodoro technique is a proven method to enhance focus on productivity and will benefit students
> 3. *To Do Pad:* Helps users stay organized
>
>
>
### Positivity and Rest
>
> 1. *Affirmation Pad:* Users can upload their accomplishments throughout their working sessions. Congratulatory texts and positive affirmations will be sent to the provided mobile number during break sessions!
> 2. *Relaxation Pad:* Offers options to entertain students while resting from studying. Users are given a range of games to play with and streaming options for fun videos!
>
>
>
### Information and Education
>
> 1. *Information Pad:* is dedicated to info about all things mental health
> 2. *Quiz Pad:* This subpage tests what users know about mental health. By taking the quiz, users gain valuable insight into how they are and information on how to improve their mental health, wellbeing, and productivity.
>
>
>
# How we built it
**React:** Balance Pad was built using React. This allowed for us to easily combine the different webpages we each worked on.
**JavaScript, HTML, and CSS:** React builds on these languages so it was necessary to gain familiarity with them
**Assembly API:** The assembly API was used to convert live audio/video into text
**Twilio:** This was used to send instant messages to users based on tracked accomplishments
# Challenges we ran into
>
> * Launching new apps with React via Visual Studio Code
> * Using Axios to run API calls
> * Displaying JSON information
> * Domain hosting of Class Pad
> * Working with Twilio
>
>
>
# Accomplishments that we're proud of
*Pranati:* I am proud that I was able to learn React from scratch, work with new tech such as Axios, and successfully use the Assembly API to create the Class Pad (something I am passionate about). I was able to persevere through errors and build a working product that is impactful. This is my first hackathon and I am glad I had so much fun.
*Simi:* This was my first time using React, Node.js, and Visual Studio. I don't have a lot of CS experience so the learning curve was steep but rewarding!
*Amitesh:* Got to work with a team to bring a complicated idea to life!
# What we learned
*Amitesh:* Troubleshooting domain creation for various pages, supporting teammates and teaching concepts
*Pranati:* I learned how to use new tech such as React, new concepts such API calls using Axios, how to debug efficiently, and how to work and collaborate in a team
*Simi:* I learned how APIs work, basic html, and how React modularizes code. Also learned the value of hackathons as this was my first
# What's next for Balance Pad
*Visualizing Music:* Our group hopes to integrate BeatCaps software to our page in the future. This would allow a more interactive music experience for users and also allow hearing impaired individuals to experience music
*Real Time Transcription:* Our group hopes to implement in real time transcription in the Class Pad to make it even easier for students. | losing |
## Inspiration
How many times have you opened your fridge door and examined its contents for something to eat/cook/stare at and ended up finding a forgotten container of food in the back of the fridge (a month past its expiry date) instead? Are you brave enough to eat it, or does it immediately go into the trash?
The most likely answer would be to dispose of it right away for health and safety reasons, but you'd be surprised - food wastage is a problem that [many countries such as Canada](https://seeds.ca/schoolfoodgardens/food-waste-in-canada-3/) contend with every year, even as world hunger continues to increase! Big corporations and industries contribute to most of the wastage that occurs worldwide, but we as individual consumers can do our part to reduce food wastage as well by minding our fridges and pantries and making sure we eat everything that we procure for ourselves.
Enter chec:xpire - the little app that helps reduce food wastage, one ingredient at a time!
## What it does
chec:xpire takes stock of the contents of your fridge and informs you which food items are close to their best-before date. chec:xpire also provides a suggested recipe which makes use of the expiring ingredients, allowing you to use the ingredients in your next meal without letting them go to waste due to spoilage!
## How we built it
We built the backend using Express.js, which laid the groundwork for interfacing with Solace, an event broker. The backend tracks food items (in a hypothetical refrigerator) as well as their expiry dates, and picks out those that are two days away from their best-before date so that the user knows to consume them soon. The app also makes use of the co:here AI to retrieve and return recipes that make use of the expiring ingredients, thus providing a convenient way to use up the expiring food items without having to figure out what to do with them in the next couple days.
The frontend is a simple Node.js app that subscribes to "events" (in this case, food approaching their expiry date) through Solace, which sends the message to the frontend app once the two-day mark before the expiry date is reached. A notification is sent to the user detailing which ingredients (and their quantities) are expiring soon, along with a recipe that uses the ingredients up.
## Challenges we ran into
The scope of our project was a little too big for our current skillset; we ran into a few problems finding ways to implement the features that we wanted to include in the project, so we had to find ways to accomplish what we wanted to do using other methods.
## Accomplishments that we're proud of
All but one member of the team are first-time hackathon participants - we're very proud of the fact that we managed to create a working program that did what we wanted it to, despite the hurdles we came across while trying to figure out what frameworks we wanted to use for the project!
## What we learned
* planning out a project that's meant to be completed within 36 hours is difficult, especially if you've never done it before!
* there were some compromises that needed to be made due to a combination of framework-related hiccups and the limitations of our current skillsets, but there's victory to be had in seeing a project through to the end even if we weren't able to accomplish every single little thing we wanted to
* Red Bull gives you wings past midnight, apparently
## What's next for chec:xpire
A fully implemented frontend would be great - we ran out of time! | ## Inspiration
The both of us study in NYC and take subways almost everyday, and we notice the rampant food insecurity and poverty in an urban area. In 2017 40 million people struggled with hunger (source Feeding America) yet food waste levels remain at an all time high (“50% of all produce in the United States is thrown away” source The Guardian). We wanted to tackle this problem, because it affects a huge population, and we see these effects in and around the city.
## What it does
Our webapp uses machine learning to detect produce and labels of packaged foods. The webapp collects this data, and stores it into a user's ingredients list. Recipes are automatically found using google search API from the ingredients list. Our code parses through the list of ingredients and generates the recipe that would maximize the amount of food items (also based on spoilage).The user may also upload their receipt or grocery list to the webapp. With these features, the goal of our product is to reduce food waste by maximizing the ingredients a user has at home. With our trained datasets that detect varying levels of spoiled produce, a user is able to make more informed choices based on the webapp's recommendation.
## How we built it
We first tried to detect images of different types of food using various platforms like open-cv and AWS. After we had this detection working, we used Flask to display the data onto a webapp. Once the information was stored on the webapp, we automatically generated recipes based on the list of ingredients. Then, we built the front-end (HTML5, CSS3) including UX/UI design into the implementation. We shifted our focus to the back-end, and we decided to detect text from receipts, grocery lists, and labels (packaged foods) that we also displayed onto our webapp. On the webapp we also included an faq page to educate our users on this epidemic. On the webapp we also posted a case study on the product in terms of UX and UI design.
## Challenges we ran
We first used open-cv for image recognition, but we learned about amazon web services, specifically, Amazon Rekognition to identify text and objects to detect expiration dates, labels, produce, and grocery lists. We trained models in sci-kit python to detect levels of spoilage/rotten produce. We encountered merge conflicts with GitHub, so we had to troubleshoot with the terminal in order to resolve them. We were new to using Flask, which we used to connect our python files to display in a webpage. We also had to choose certain features over others that would best fit the needs of the users. This was also our first hackathon ever!
## Accomplishments that we're proud of
We feel proud to have learned new tools in different areas of technology (computer vision, machine learning, different languages) in a short period of time. We also made use of the mentor room early on, which was helpful. We learned different methods to implement similar ideas, and we were able to choose the most efficient one (example: AWS was more efficient for us than open-cv). We also used different functions in order to not repeat lines of code.
## What we learned
New technologies and different ways of implementing them. We both had no experience in ML and computer vision prior to this hackathon. We learned how to divide an engineering project into smaller tasks that we could complete. We managed our time well, so we could choose workshops to attend, but also focus on our project, and get rest.
## What's next for ZeroWaste
In a later version, ZeroWaste would store and analyze the user's history of food items, and recommend recipes (which max out the ingredients that are about to expire using computer vision) as well as other nutritional items similar to what the user consistently eats through ML. In order to tackle food insecurity at colleges and schools ZeroWaste would detect when fresh produce would expire, and predict when an item may expire based on climate/geographic region of community. We had hardware (raspberry PI), which we could have used with a software ML method, so in the future we would want to test the accuracy of our code with the hardware. | ## Inspiration
With billions of tons of food waste occurring in Canada every year. We knew that there needs to exist a cost-effective way to reduce food waste that can empower restaurant owners to make more eco-conscious decisions while also incentivizing consumers to choose more environmentally-friendly food options.
## What it does
Re-fresh is a two-pronged system that allows users to search for food from restaurants that would otherwise go to waste at a lower price than normal. On the restaurant side, we provide a platform to track and analyze inventory in a way that allows restaurants to better manage their requisitions for produce so that they do not generate any extra waste and they can ensure profits are not being thrown away.
## How we built it
For the backend portion of the app, we utilized cockroachDB in python and javascript as well as React Native for the user mobile app and the enterprise web application. To ensure maximum protection of our user data, we used SHA256 encryption to encrypt sensitive user information such as usernames and password.
## Challenges we ran into
Due to the lack of adequate documentation as well as a plethora of integration issues with react.js and node, cockroachDB was a difficult framework to work with. Other issues we ran into were some problems on the frontend with utilizing chart.js for displaying graphical representations of enterprise data.
## Accomplishments that we're proud of
We are proud of the end design of our mobile app and web application. Our team are not native web developers so it was a unique experience stepping out of our comfort zone and getting to try new frameworks and overall we are happy with what we learned as well as how we were able to utilize our brand understanding of programming principles to create this project.
## What we learned
We learned more about web development than what we knew before. We also learned that despite the design-oriented nature of frontend development there are many technical hurdles to go through when creating a full stack application and that there is a wide array of different frameworks and APIs that are useful in developing web applications.
## What's next for Re-Fresh
The next step for Re-Fresh is restructuring the backend architecture to allow ease of scalability for future development as well as hopefully being able to publish it and attract a customer-base. | losing |
View the SlideDeck for this project at: [slides](https://docs.google.com/presentation/d/1G1M9v0Vk2-tAhulnirHIsoivKq3WK7E2tx3RZW12Zas/edit?usp=sharing)
## Inspiration / Why
It is no surprise that mental health has been a prevailing issue in modern society. 16.2 million adults in the US and 300 million people in the world have depression according to the World Health Organization. Nearly 50 percent of all people diagnosed with depression are also diagnosed with anxiety. Furthermore, anxiety and depression rates are a rising issue among the teenage and adolescent population. About 20 percent of all teens experience depression before they reach adulthood, and only 30 percent of depressed teens are being treated for it.
To help battle for mental well-being within this space, we created DearAI. Since many teenagers do not actively seek out support for potential mental health issues (either due to financial or personal reasons), we want to find a way to inform teens about their emotions using machine learning and NLP and recommend to them activities designed to improve their well-being.
## Our Product:
To help us achieve this goal, we wanted to create an app that integrated journaling, a great way for users to input and track their emotions over time. Journaling has been shown to reduce stress, improve immune function, boost mood, and strengthen emotional functions. Journaling apps already exist, however, our app performs sentiment analysis on the user entries to help users be aware of and keep track of their emotions over time.
Furthermore, every time a user inputs an entry, we want to recommend the user something that will lighten up their day if they are having a bad day, or something that will keep their day strong if they are having a good day. As a result, if the natural language processing results return a negative sentiment like fear or sadness, we will recommend a variety of prescriptions from meditation, which has shown to decrease anxiety and depression, to cat videos on Youtube. We currently also recommend dining options and can expand these recommendations to other activities such as outdoors activities (i.e. hiking, climbing) or movies.
**We want to improve the mental well-being and lifestyle of our users through machine learning and journaling.This is why we created DearAI.**
## Implementation / How
Research has found that ML/AI can detect the emotions of a user better than the user themself can. As a result, we leveraged the power of IBM Watson’s NLP algorithms to extract the sentiments within a user’s textual journal entries. With the user’s emotions now quantified, DearAI then makes recommendations to either improve or strengthen the user’s current state of mind. The program makes a series of requests to various API endpoints, and we explored many APIs including Yelp, Spotify, OMDb, and Youtube. Their databases have been integrated and this has allowed us to curate the content of the recommendation based on the user’s specific emotion, because not all forms of entertainment are relevant to all emotions.
For example, the detection of sadness could result in recommendations ranging from guided meditation to comedy. Each journal entry is also saved so that users can monitor the development of their emotions over time.
## Future
There are a considerable amount of features that we did not have the opportunity to implement that we believe would have improved the app experience. In the future, we would like to include video and audio recording so that the user can feel more natural speaking their thoughts and also so that we can use computer vision analysis on the video to help us more accurately determine users’ emotions. Also, we would like to integrate a recommendation system via reinforcement learning by having the user input whether our recommendations improved their mood or not, so that we can more accurately prescribe recommendations as well. Lastly, we can also expand the APIs we use to allow for more recommendations. | ## Inspiration
We got together a team passionate about social impact, and all the ideas we had kept going back to loneliness and isolation. We have all been in high pressure environments where mental health was not prioritized and we wanted to find a supportive and unobtrusive solution. After sharing some personal stories and observing our skillsets, the idea for Remy was born. **How can we create an AR buddy to be there for you?**
## What it does
**Remy** is an app that contains an AR buddy who serves as a mental health companion. Through information accessed from "Apple Health" and "Google Calendar," Remy is able to help you stay on top of your schedule. He gives you suggestions on when to eat, when to sleep, and personally recommends articles on mental health hygiene. All this data is aggregated into a report that can then be sent to medical professionals. Personally, our favorite feature is his suggestions on when to go on walks and your ability to meet other Remy owners.
## How we built it
We built an iOS application in Swift with ARKit and SceneKit with Apple Health data integration. Our 3D models were created from Mixima.
## Challenges we ran into
We did not want Remy to promote codependency in its users, so we specifically set time aside to think about how we could specifically create a feature that focused on socialization.
We've never worked with AR before, so this was an entirely new set of skills to learn. His biggest challenge was learning how to position AR models in a given scene.
## Accomplishments that we're proud of
We have a functioning app of an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for.
## What we learned
Aside from this being many of the team's first times work on AR, the main learning point was about all the data that we gathered on the suicide epidemic for adolescents. Suicide rates have increased by 56% in the last 10 years, and this will only continue to get worse. We need change.
## What's next for Remy
While our team has set out for Remy to be used in a college setting, we envision many other relevant use cases where Remy will be able to better support one's mental health wellness.
Remy can be used as a tool by therapists to get better insights on sleep patterns and outdoor activity done by their clients, and this data can be used to further improve the client's recovery process. Clients who use Remy can send their activity logs to their therapists before sessions with a simple click of a button.
To top it off, we envisage the Remy application being a resource hub for users to improve their overall wellness. Through providing valuable sleep hygiene tips and even lifestyle advice, Remy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery. | ## Inspiration
Mental health is a serious issue but also one hard to diagnose and address. In these times of our virtual environments and uncertainties, people tend to face more difficulty and strain on their mental health. We aim to help individuals become more aware and hopefully, identify signs of mental health issues.
We realized that not too many people look after their mental health until it's far too late. As such, we decided to implement a solution that motivated people to write down their thoughts so that they could track their mood over time. Other mental health tracking apps exist on the market right now, but none utilize NLP to take advantage of the actual writing part of journalling. That is how our idea of Mood was born!
## What it does
Mood is a journaling app. It allows the user to make quick daily entries about how they feel and what significant events occurred. Using this text entry, the app performs sentiment analysis to rate the current mood of the user and tracks it over time. This enables the app to identify trends in the user's mental health daily, weekly, monthly, and overall. It also calculates the most frequently occurring emotional keywords and general keywords throughout a period, on top of the top positive and negative keywords that make the user smile and cry the most.
## How we built it
We built the interface of Mood using Flutter. Our team decided that for a journaling application, it would be more accessible if it was available on mobile. The sentiment analysis is done using Python and its NLP libraries like textblob, spacy, nltk, and transformers. Since the backend is written in Python, we decided to use Flask to manage our back-end server. These are connected with a REST API.
## Challenges we ran into
We all had to learn a new technology for this project and was definitely one of the biggest challenges that we faced. Another big challenge that we ran into is creating APIs to connect our front-end to the back-end.
## Accomplishments that we're proud of
Our team has a lot to be proud of. We all learned something new and got something to take away from this Hackathon. Even though we stayed up really late, we are happy that we have something to show to the judges.
## What we learned
One of the biggest lessons that we learned from this experience is how important the initial development plan is. Running into problems later on caused our team to spend more time to find a workaround to problems that in hindsight could have been better planned for.
## What's next for Mood
Mood has lots of room for future improvement. Features such as user authentication and cloud storage would be great additions to improve the security and protect the privacy of our users. Also, notification systems on long-term and short-term abnormal trends (bipolar, depression,...) of mood would be meaningful for users to understand their emotional behaviour. Other improvements would include a more accurate and advanced model of sentimental analysis and improving accessibility of the interface (i.e. voice or image input).
An online system that learns the users’ mood patterns and responds better to changes. | winning |
## Inspiration
Kimoyo is named after the kimoyo beads in Black Panther-- they're beads that allow you to start a 3D video call right in the palm of your hand. Hologram communication, or "holoportation" as we put it, is not a new idea in movies. Similar scenes occur in Star Wars and in Kingsman, for example. However, holoportation is certainly an up-and-coming idea in the real world!
## What it does
In the completed version of Kimoyo, users will be able to use an HTC Vive to view the avatars of others in a video call, while simultaneously animating their own avatar through inverse kinematics (IK). Currently, Kimoyo has a prototype IK system working, and has a sample avatar and sample environment to experience!
## How I built it
Starting this project with only a basic knowledge of Unity and with no other VR experience (I wasn't even sure what HTC Vive was!), I leaned on mentors, friends, and many YouTube tutorials to learn enough about Vive to put together a working model. So far, Kimoyo has been done almost entirely in Unity using SteamVR, VRTK, and MakeHuman assets.
## Challenges I ran into
My lack of experience was a limiting factor, and I feel that I had to spend quite a bit of time watching tutorials, debugging, and trying to solve very simple problems. That being said, the resources available saved me a lot of time, and I feel that I was able to learn enough to put together a good project in the time available. The actual planning of the project-- deciding which hardware to use and reasoning through design problems-- was also challenging, but very rewarding as well.
## Accomplishments that I'm proud of
I definitely could not have built Kimoyo alone, and I'm really glad and very thankful that I was able to learn so much from the resources all around me. There have been bugs and issues and problems that seemed absolutely intractable, but I was able to keep going with the help of others around me!
## What's next for Kimoyo
The next steps for Kimoyo is to get a complete, working version up. First, we plan to expand the hand inverse kinematics so the full upper body moves naturally. We also plan to add additional camera perspectives and settings, integrate sound, beginning work with a Unity network manager to allow multiple people to join an environment, and of course building and deploying an app.
After that? Future steps might include writing interfaces for creation of custom environments (including AR?), and custom avatars, as well as developing a UI involving the Vive controllers-- Kimoyo has so many possibilities! | # 💡 Inspiration
Meeting new people is an excellent way to broaden your horizons and discover different cuisines. Dining with others is a wonderful opportunity to build connections and form new friendships. In fact, eating alone is one of the primary causes of unhappiness, second only to mental illness and financial problems. Therefore, it is essential to make an effort to find someone to share meals with. By trying new cuisines with new people and exploring new neighbourhoods, you can make new connections while enjoying delicious food.
# ❓ What it does
PlateMate is a unique networking platform that connects individuals in close proximity and provides the setup of an impromptu meeting over some great food! It enables individuals to explore new cuisines and new individuals by using Cohere to process human-written text and discern an individual’s preferences, interests, and other attributes. This data is then aggregated to optimize a matching algorithm that pairs users. Along with a matchmaking feature, PlateMate utilizes Google APIs to highlight nearby restaurant options that fit into users’ budgets. The app’s recommendations consider a user’s budget to help regulate spending habits and make managing finances easier. PlateMate takes into account many factors to ensure that users have an enjoyable and reliable experience on the platform.
# 🚀 Exploration
PlateMate provides opportunities for exploration by expanding social circles with interesting individuals with different life experiences and backgrounds. You are matched to other nearby users with similar cuisine preferences but differing interests. Restaurant suggestions are also provided based on your characteristics and your match’s characteristics. This provides invaluable opportunities to explore new cultures and identities. As the world emerges from years of lockdown and the COVID-19 pandemic, it is more important than ever to find ways to reconnect with others and explore different perspectives.
# 🧰 How we built it
**React, Tailwind CSS, Figma**: The client side of our web app was built using React and styled with Tailwind CSS based on a high-fidelity mockup created on Figma.
**Express.js**: The backend server was made using Express.js and managed routes that allowed our frontend to call third-party APIs and obtain results from Cohere’s generative models.
**Cohere**: User-specific keywords were extracted from brief user bios using Cohere’s generative LLMs. Additionally, after two users were matched, Cohere was used to generate a brief justification of why the two users would be a good match and provide opportunities for exploration.
**Google Maps Platform APIs**: The Google Maps API was used to display a live and dynamic map on the homepage and provide autocomplete search suggestions. The Google Places API obtained lists of nearby restaurants, as well as specific information about restaurants that users were matched to.
**Firebase**: User data for both authentication and matching purposes, such as preferred cuisines and interests, were stored in a Cloud Firestore database.
# 🤔 Challenges we ran into
* Obtaining desired output and formatting from Cohere with longer and more complicated prompts
* Lack of current and updated libraries for the Google Maps API
* Creating functioning Express.js routes that connected to our React client
* Maintaining a cohesive and productive team environment when sleep deprived
# 🏆 Accomplishments that we're proud of
* This was the first hackathon for two of our team members
* Creating a fully-functioning full-stack web app with several new technologies we had never touched before, including Cohere and Google Maps Platform APIs
* Extracting keywords and generating JSON objects with a high degree of accuracy using Cohere
# 🧠 What we learned
* Prompt engineering, keyword extraction, and text generation in Cohere
* Server and route management in Express.js
* Design and UI development with Tailwind CSS
* Dynamic map display and search autocomplete with Google Maps Platform APIs
* UI/UX design in Figma
* REST API calls
# 👉 What's next for PlateMate
* Provide restaurant suggestions that are better tailored to users’ budgets by using Plaid’s financial APIs to accurately determine their average spending
* Connect users directly through an in-app chat function
* Friends and network system
* Improved matching algorithm | ## Our Inspiration
We were inspired by apps like Duolingo and Quizlet for language learning, and wanted to extend those experiences to a VR environment. The goal was to gameify the entire learning experience and make it immersive all while providing users with the resources to dig deeper into concepts.
## What it does
EduSphere is an interactive AR/VR language learning VisionOS application designed for the new Apple Vision Pro. It contains three fully developed features: a 3D popup game, a multi-lingual chatbot, and an immersive learning environment. It leverages the visually compelling and intuitive nature of the VisionOS system to target three of the most crucial language learning styles: visual, kinesthetic, and literacy - allowing users to truly learn at their own comfort. We believe the immersive environment will make language learning even more memorable and enjoyable.
## How we built it
We built the VisionOS app using the Beta development kit for the Apple Vision Pro. The front-end and AR/VR components were made using Swift, SwiftUI, Alamofire, RealityKit, and concurrent MVVM design architecture. 3D Models were converted through Reality Converter as .usdz files for AR modelling. We stored these files on the Google Cloud Bucket Storage, with their corresponding metadata on CockroachDB. We used a microservice architecture for the backend, creating various scripts involving Python, Flask, SQL, and Cohere. To control the Apple Vision Pro simulator, we linked a Nintendo Switch controller for interaction in 3D space.
## Challenges we ran into
Learning to build for the VisionOS was challenging mainly due to the lack of documentation and libraries available. We faced various problems with 3D Modelling, colour rendering, and databases, as it was difficult to navigate this new space without references or sources to fall back on. We had to build many things from scratch while discovering the limitations within the Beta development environment. Debugging certain issues also proved to be a challenge. We also really wanted to try using eye tracking or hand gesturing technologies, but unfortunately, Apple hasn't released these yet without a physical Vision Pro. We would be happy to try out these cool features in the future, and we're definitely excited about what's to come in AR/VR!
## Accomplishments that we're proud of
We're really proud that we were able to get a functional app working on the VisionOS, especially since this was our first time working with the platform. The use of multiple APIs and 3D modelling tools was also the amalgamation of all our interests and skillsets combined, which was really rewarding to see come to life. | partial |
## Inspiration
We saw the sad reality that people often attending hackathons don't exercise regularly or do so while coding. We decided to come up with a solution
## What it does
Lets a user log in, and watch short fitness videos of exercises they can do while attending a hackathon
## How we built it
We used HTML & CSS for the frontend, python & sqlite3 for the backend and django to merge all three. We also deployed a DCL worker
## Challenges we ran into
Learning django and authenticating users in a short span of time
## Accomplishments that we're proud of
Getting a functioning webapp up in a short time
## What we learned
How do design a website, how to deploy a website, simple HTML, python objects, django header tags
## What's next for ActiveHawk
We want to make activehawk the Tiktok of hackathon fitness. we plan on adding more functionality for apps as well as a live chat room for instructors using the Twello api | ## Inspiration
Amidst the hectic lives and pandemic struck world, mental health has taken a back seat. This thought gave birth to our inspiration of this web based app that would provide activities customised to a person’s mood that will help relax and rejuvenate.
## What it does
We planned to create a platform that could detect a users mood through facial recognition, recommends yoga poses to enlighten the mood and evaluates their correctness, helps user jot their thoughts in self care journal.
## How we built it
Frontend: HTML5, CSS(frameworks used:Tailwind,CSS),Javascript
Backend: Python,Javascript
Server side> Nodejs, Passport js
Database> MongoDB( for user login), MySQL(for mood based music recommendations)
## Challenges we ran into
Incooperating the open CV in our project was a challenge, but it was very rewarding once it all worked .
But since all of us being first time hacker and due to time constraints we couldn't deploy our website externally.
## Accomplishments that we're proud of
Mental health issues are the least addressed diseases even though medically they rank in top 5 chronic health conditions.
We at umang are proud to have taken notice of such an issue and help people realise their moods and cope up with stresses encountered in their daily lives. Through are app we hope to give people are better perspective as well as push them towards a more sound mind and body
We are really proud that we could create a website that could help break the stigma associated with mental health. It was an achievement that in this website includes so many features to help improving the user's mental health like making the user vibe to music curated just for their mood, engaging the user into physical activity like yoga to relax their mind and soul and helping them evaluate their yoga posture just by sitting at home with an AI instructor.
Furthermore, completing this within 24 hours was an achievement in itself since it was our first hackathon which was very fun and challenging.
## What we learned
We have learnt on how to implement open CV in projects. Another skill set we gained was on how to use Css Tailwind. Besides, we learned a lot about backend and databases and how to create shareable links and how to create to do lists.
## What's next for Umang
While the core functionality of our app is complete, it can of course be further improved .
1)We would like to add a chatbot which can be the user's guide/best friend and give advice to the user when in condition of mental distress.
2)We would also like to add a mood log which can keep a track of the user's daily mood and if a serious degradation of mental health is seen it can directly connect the user to medical helpers, therapist for proper treatement.
This lays grounds for further expansion of our website. Our spirits are up and the sky is our limit | ## Inspiration
Being a student isn't easy. There are so many things that you have to be on top of, but it can't be difficult to organize your study sessions effectively. We were inspired to create Breaks Without Barriers to provide students with a mobile companion to aid them in their study sessions.
## What it does
Our app has several features that can help a student plan their studying accordingly, such as a customizable to-do list as well as a built-in study and break timer. It also holds students accountable for maintaining their mental and physical well-being by periodically sending them notifications -- reminding them to drink some water, check their posture, and the like. In the hustle and bustle of schoolwork, students often forget to take care of themselves!
We also incorporated some fun elements such as a level-up feature that tracks your progress and shows where you stand in comparison to other students. Plus, the more you study, the more app themes you unlock! You are also given some personalized Spotify playlist recommendations if you prefer studying with some background music.
Students can connect with each other, by viewing each other's profiles and exchanging their contact information. This allows students to network with other students, creating an online community of like-minded individuals.
## How we built it
We split our group into sub-teams: one for front-end development and another for back end-development. The front-end team designed the proposed app user-interface using Figma, and the back-end team created functional software using Python, Tkinter, SpotifyAPI, and twilio technologies.
The main framework of our project is built with Tkinter and it is composed of 3 programs that interact with each other: backend.py, frontend.py and login.py. Frontend.py consists of the main GUI, while backend.py is called when external functions are needed. Login.py is a separate file that creates a login window to verify the user.
## Challenges we ran into
This was actually the first hackathon for all four of us! This was a new experience for us, and we had to figure out how to navigate the entire process. Most of us had limited coding knowledge, and we had to learn new softwares while concurrently developing one of our own. We also ran into issues with time -- given a period of 36 hours to create an entire project, we had troubles in spreading out our work time effectively.
## Accomplishments that we're proud of
We're proud of creating an idea that was oriented toward students in helping them navigate their trials. We're also just proud of successfully completing our very first hackathon!
## What's next for Breaks Without Barriers
In the future we plan to implement more features that will connect individuals in a much more efficient way, including moderated study sessions, filtered search and an AI that will provide users with studying information. We also want to further develop the spotify function and allow for music to be played directly through the api. | partial |
## Inspiration
We set out to build a product that solves two core pain points in our daily lives: 1) figuring out what to do for every meal 😋 and 2) maintaining personal relationships 👥.
As college students, we find ourselves on a daily basis asking the question, “What should I do for lunch today?” 🍔 — many times with a little less than an hour left before it’s time to eat. The decision process usually involves determining if one has the willpower to cook at home, and if not, figuring out where to eat out and if there is anyone to eat out with. For us, this usually just ends up being our roommates, and we find ourselves quite challenged by maintaining depth of relationships with people we want to because the context windows are too large to juggle.
Enter, BiteBuddy.
## What it does
We divide the problem we’re solving into two main scenarios.
1. **Spontaneous (Eat Now!)**: It’s 12PM and Jason realizes that he doesn’t have lunch plans. BiteBuddy will help him make some! 🍱
2. **Futuristic (Schedule Ahead!)**: It’s Friday night and Parth decides that he wants to plan out his entire next week (Forkable, anyone?). 🕒
**Eat Now** allows you to find friends that are near you and automatically suggests nearby restaurants that would be amenable to both of you based on dietary and financial considerations. Read more below to learn some of the cool API interactions and ML behind this :’). 🗺️
**Schedule Ahead** allows you to plan your week ahead and actually think about personal relationships. It analyzes closeness between friends, how long it’s been since you last hung out, looks at calendars, and similar to above automatically suggests time and restaurants. Read more below for how! 🧠
We also offer a variety of other features to support the core experience:
1. **Feed**. View a streaming feed of the places your friends have been going. Enhance the social aspect of the network.
2. **Friends** (no, we don’t offer friends). Manage your relationships in a centralized way and view LLM-generated insights regarding relationships and when might be the right time/how to rekindle them.
## How we built it
The entire stack we used for this project was Python, with the full stack web development being enabled by the **Reflex** Python package, and database being Firebase.
**Eat Now** is a feature that bases itself around geolocation, dietary preferences, financial preferences, calendar availability, and LLM recommendation systems. We take your location, go through your friends list and find the friends who are near you and don’t have immediate conflicts on their calendar, compute an intersection of possible restaurants via the Yelp API that would be within a certain radius of both of you, filter this intersection with dietary + financial preferences (vegetarian? vegan? cheap?), then pass all our user context into a LLAMA-13B-Chat 💬 to generate a final recommendation. This recommendation surfaces itself as a potential invite (in figures above) that the user can choose whether or not to send to another person. If they accept, a calendar invite is automatically generated.
**Schedule Ahead** is a feature that bases itself around graph machine learning, calendar availability, personal relationship status (how close are y’all? When is the last time you saw each other?), dietary/financial preferences, and more. By looking ahead into the future, we take the time to look through our social network graph with associated metadata and infer relationships via Spectral Clustering 📊. Based on how long it’s been since you last hung out and the strength of your relationship, it will surface who to meet with as a priority queue and look at both calendars to determine mutually available times and locations with the same LLM.
We use retrieval augmented generation (RAG) 📝 throughout our app to power personalized friend insights (to learn more about which friends you should catch up with, learn that Jason is a foodie, and what cuisines you and Parth like). This method is also a part of our recommendation algorithm.
## Challenges we ran into
1. **Dealing with APIs.** We utilized a number of APIs to provide a level of granularity and practicality to this project, rather than something that’s solely a mockup. Dealing with APIs though comes with its own issues. The Yelp API, for example, continuously rate limited us even though we cycled through keys from all of our developer accounts :’). The Google Calendar API required a lot of exploration with refresh tokens, necessary scopes, managing state with google auth, etc.
2. **New Technologies.** We challenged ourselves by exploring some new technologies as a part of our stack to complete this project. Graph ML for example was a technology we hadn’t worked with much before, and we quickly ran into the cold start problem with meaningless graphs and unintuitive relationships. Reflex was another new technology that we used to complete our frontend and backend entirely in Python. None of us had ever even pip installed this package before, so learning how to work with it and then turn it into something complex and useful was a fun challenge. 💡
3. **Latency.** Because our app queries several APIs, we had to make our code as performant as possible, utilize concurrency where possible, and add caching for frequently-queried endpoints. 🖥️
## Accomplishments that we're proud of
The amount of complexity that we were able to introduce into this project made it mimic real-life as close as possible, which is something we’re very proud of. We’re also proud of all the new technologies and Machine Learning methods we were able to use to develop a product that would be most beneficial to end users.
## What we learned
This project was an incredible learning experience for our team as we took on multiple technically complex challenges to reach our ending solution -- something we all thought that we had a potential to use ourselves.
## What's next for BiteBuddy
The cool thing about this project was that there were a hundred more features we wanted to include but didn’t remotely have the time to implement. Here are some of our favorites 🙂:
1. **Groups.** Social circles often revolve around groups. Enabling the formation of groups on the app would give us more metadata information regarding the relationships between people, lending itself to improved GNN algorithms and recommendations, and improve the stickiness of the product by introducing network effects.
2. **New Intros: Extending to the Mutuals.** We’ve built a wonderful graph of relationships that includes metadata not super common to a social network. Why not leverage this to generate introductions and form new relationships between people?
3. **More Integrations.** Why use DonutBot when you can have BiteBuddy?
## Built with
Python, Reflex, Firebase, Together AI, ❤️, and boba 🧋 | ## Inspiration
For many college students, finding time to socialize and make new friends is hard. Everyone's schedule seems perpetually busy, and arranging a dinner chat with someone you know can be a hard and unrewarding task. At the same time, however, having dinner alone is definitely not a rare thing. We've probably all had the experience of having social energy on a particular day, but it's too late to put anything on the calendar. Our SMS dinner matching project exactly aims to **address the missed socializing opportunities in impromptu same-day dinner arrangements**. Starting from a basic dining-hall dinner matching tool for Penn students only, we **envision an event-centered, multi-channel social platform** that would make organizing events among friend groups, hobby groups, and nearby strangers effortless and sustainable in the long term for its users.
## What it does
Our current MVP, built entirely within the timeframe of this hackathon, allows users to interact with our web server via **SMS text messages** and get **matched to other users for dinner** on the same day based on dining preferences and time availabilities.
### The user journey:
1. User texts anything to our SMS number
2. Server responds with a welcome message and lists out Penn's 5 dining halls for the user to choose from
3. The user texts a list of numbers corresponding to the dining halls the user wants to have dinner at
4. The server sends the user input parsing result to the user and then asks the user to choose between 7 time slots (every 30 minutes between 5:00 pm and 8:00 pm) to meet with their dinner buddy
5. The user texts a list of numbers corresponding to the available time slots
6. The server attempts to match the user with another user. If no match is currently found, the server sends a text to the user confirming that matching is ongoing. If a match is found, the server sends the matched dinner time and location, as well as the phone number of the matched user, to each of the two users matched
7. The user can either choose to confirm or decline the match
8. If the user confirms the match, the server sends the user a confirmation message; and if the other user hasn't confirmed, notifies the other user that their buddy has already confirmed the match
9. If both users in the match confirm, the server sends a final dinner arrangement confirmed message to both users
10. If a user decides to decline, a message will be sent to the other user that the server is working on making a different match
11. 30 minutes before the arranged time, the server sends each user a reminder
###Other notable backend features
12. The server conducts user input validation for each user text to the server; if the user input is invalid, it sends an error message to the user asking the user to enter again
13. The database maintains all requests and dinner matches made on that day; at 12:00 am each day, the server moves all requests and matches to a separate archive database
## How we built it
We used the Node.js framework and built an Express.js web server connected to a hosted MongoDB instance via Mongoose.
We used Twilio Node.js SDK to send and receive SMS text messages.
We used Cron for time-based tasks.
Our notable abstracted functionality modules include routes and the main web app to handle SMS webhook, a session manager that contains our main business logic, a send module that constructs text messages to send to users, time-based task modules, and MongoDB schema modules.
## Challenges we ran into
Writing and debugging async functions poses an additional challenge. Keeping track of potentially concurrent interaction with multiple users also required additional design work.
## Accomplishments that we're proud of
Our main design principle for this project is to keep the application **simple and accessible**. Compared with other common approaches that require users to download an app and register before they can start using the service, using our tool requires **minimal effort**. The user can easily start using our tool even on a busy day.
In terms of architecture, we built a **well-abstracted modern web application** that can be easily modified for new features and can become **highly scalable** without significant additional effort (by converting the current web server into the AWS Lambda framework).
## What we learned
1. How to use asynchronous functions to build a server - multi-client web application
2. How to use posts and webhooks to send and receive information
3. How to build a MongoDB-backed web application via Mongoose
4. How to use Cron to automate time-sensitive workflows
## What's next for SMS dinner matching
### Short-term feature expansion plan
1. Expand location options to all UCity restaurants by enabling users to search locations by name
2. Build a light-weight mobile app that operates in parallel with the SMS service as the basis to expand with more features
3. Implement friend group features to allow making dinner arrangements with friends
###Architecture optimization
4. Convert to AWS Lamdba serverless framework to ensure application scalability and reduce hosting cost
5. Use MongoDB indexes and additional data structures to optimize Cron workflow and reduce the number of times we need to run time-based queries
### Long-term vision
6. Expand to general event-making beyond just making dinner arrangements
7. Create explore (even list) functionality and event feed based on user profile
8. Expand to the general population beyond Penn students; event matching will be based on the users' preferences, location, and friend groups | ## Inspiration
Whenever friends and I want to hang out, we always ask about our schedule. Trying to find a free spot could take a long time when I am with a group of friends. I wanted to create an app that simplifies that process.
## What it does
The website will ask the duration of the event and ask to fill out each person's schedule. The app will quickly find the best period for everyone.
## How we built it
We built it using HTML/CSS and Javascript. We also used a calendar plugin.
## Challenges we ran into
The integration of the calendar plugin is hard and the algorithm that finds the best time slot is also hard to implement.
## Accomplishments that we're proud of
Although the site is not finished, we have the overall website structure done.
## What we learned
It was hard to work on this project in a team of 2.
## What's next for MeetUp
We will try to finish this project after pennapps. | partial |
## Inspiration
Every day, about 37 people in the United States die in drunk-driving crashes — that's one person every 39 minutes. In 2021, 13,384 people died in alcohol-impaired driving traffic deaths — a 14% increase from 2020. These deaths were all preventable. [1]
In 2019 alone, there were 15,000 injuries and 186 fatalities attributed to street racing. [4]
**Daily**: Every day, on average, 4 Canadians are killed and 175 are injured in impairment-related crashes. **Annually**: We estimate between 1,250 and 1,500 people are killed and more than 63,000 are injured each year in Canada in impairment-related crashes. [2]
In 2020, the total estimated annual cost of fatalities from accidents involving drivers impaired by alcohol was approximately $123.3 billion. This figure encompasses both medical expenses and the estimated costs associated with the loss of life. Of those killed in such accidents, 62% were the impaired drivers themselves, while 38% included passengers in the impaired drivers' vehicles, occupants of other vehicles, or non-occupants like pedestrians. Additionally, 229 children aged 0–14 years lost their lives in accidents involving alcohol-impaired drivers, accounting for 21% of all traffic-related fatalities within this age group for that year. [3]
And these are just the statistics from the US and Canada alone.
The prevention and reactionary methods for impaired driving and street racing just aren't cutting it. Along with the impact on families and loved ones, the financial burden on the economy, and the increasing risk on law enforcement, we decided to create a solution that aims to tackle the problem when it appears.
## What it does
From municipal and law enforcement cameras, we’re able to gather live footage or recordings and determine driving patterns based on the linearity of the movement of cars. Through this, a computer-vision model can determine if someone is swerving too much, to alert law enforcement. This solution enables law enforcement to review the footage as alerts are sent to a heads-up display (HUD) which is our front-end, in real time, enabling them to quickly determine a course of action.
If you’re curious or want to learn more about our project, check out the [GitHub](https://github.com/michael-han-dev/QHacks2024) and follow the instructions to upload your own video and try it out! Some of us will also be continuing to update the project and take it our own direction so follow along!
## How we built it
We began with an idea: we wanted to use publicly accessible data to detect impaired drivers with the goal of keeping them off the road. We mapped out the requirements for our model and started looking into methods to detect cars, settling on [YOLOv8](https://github.com/ultralytics/ultralytics), a series of lightweight computer vision models with tracking capabilities. We initialized one model, `yolov8n`, and used OpenCV to run through each frame of an uploaded video. For each frame, we ran inference to obtain bounding boxes for images on the screen. Next, we looked at ways to track the motion of detected cars; luckily, object tracking only required a few small changes to initialize detection history and point generation. We stored the “movement” of each detected car in a dictionary for later use. At this point, we made the observation that on any straight road (such as a highway), a higher-than-street-level view of the cars would yield linear paths for straight-moving objects. Hence, to gauge the linearity of each car, we settled on a linear regression that calculates the mean-squared error. Above a certain threshold established through our research and testing, the error would indicate anomalous/impaired driving (i.e. a non-linear, swerving path).
For our user interface, we used React as our front-end to display all the different cameras that would be using DIONYSUS. Each display when clicked, would expand and provide a larger view of the camera angle. Next to the video feed is the linear regression plot with analysis describing potential unsafe drivers. For example, white car1 (which would be labeled on the video) has a mean-squared error 1000, potential unsafe driving detected.
## Challenges we ran into
One challenge we ran into was the lack of footage of unsafe driving from an aerial perspective. While there were many examples of dashcam footage, the data extracted from them yielded poor results as the images yielded viewing angles of only one dimension, or had a moving reference frame. This led to the use of highway cameras and helicopter shots as our primary data sources to allow for two-dimension imaging. These video clips were uncommon and had other noise factors such as camera shake or minor cases of unsafe driving. To generate the data of unsafe driving, we turned to the game Rocket League to simulate swerving and applied the footage to our existing model which was able to detect unsafe driving.
Another challenge was figuring out which statistical analysis to apply to determine unsafe driving. The team went with a regression model analysis where plotted points from the cars would be compared to a line of best fit. The mean squared value was used as a metric to determine how much of the data was not on the line of best fit. While the regression model fit the scope of the project, with more time, a more thorough model such as the Fast-Fourier Transform would be applied as a general noise-detection model.
## Accomplishments that we're proud of
* For 3 members of our team, this was their first hackathon! It was a great learning experience to implement classroom learnings and previous project experiences into a fast-paced project with real world application such as this one!
* Editing the model and customizing to our own liking was quite difficult but allows us to serve a purpose impacting so many.
* Instead of just using javascript, we challenged ourselves to incorporate a full-stack application using react instead and it worked out pretty well, but more importantly, we learned a ton.
* We did a lot of research to back up our initial hypothesis to build a product with a wide reaching audience and global impact
## Additional Information
Some additional information which influenced our decision in creating this project backed by statistics.
# Dangers of Drunk Driving:
* Every day, about 37 people in the United States die in drunk-driving crashes — that's one person every 39 minutes. In 2021, 13,384 people died in alcohol-impaired driving traffic deaths — a 14% increase from 2020. These deaths were all preventable. [1]
* Daily: Every day, on average, 4 Canadians are killed and 175 are injured in impairment-related crashes.
* Annually: We estimate between 1,250 and 1,500 people are killed and more than 63,000 are injured each year in Canada in impairment-related crashes. [2]
* In 2020, the total estimated annual cost of fatalities from accidents involving drivers impaired by alcohol was approximately $123.3 billion. This figure encompasses both medical expenses and the estimated costs associated with the loss of life. Of those killed in such accidents, 62% were the impaired drivers themselves, while 38% included passengers in the impaired drivers' vehicles, occupants of other vehicles, or non-occupants like pedestrians. Additionally, 229 children aged 0–14 years lost their lives in accidents involving alcohol-impaired drivers, accounting for 21% of all traffic-related fatalities within this age group for that year. [3]
* The number of fatalities involving drivers impaired by drugs other than alcohol each year remains uncertain due to limitations in data collection. Nonetheless, certain studies have evaluated the presence of alcohol and drugs in drivers involved in serious accidents. For instance, a study conducted across seven trauma centers on 4,243 drivers who sustained severe injuries in crashes between September 2019 and July 2021 revealed that 54% tested positive for alcohol and/or drugs.
* Among these, 22% were found to have alcohol in their system, 25% tested positive for marijuana, 9% for opioids, 10% for stimulants, and 8% for sedatives. [3]
* About 1 million arrests are made in the United States each year for driving under the influence of alcohol and/or drugs.13,14 However, results from national self-report surveys show that these arrests represent only a small portion of the times impaired drivers are on the road.
-The 2020 National Survey on Drug Use and Health (NSDUH) revealed that among U.S. residents aged 16 and older, the following numbers reported driving under the influence over the past year:
* 18.5 million for alcohol, which represents 7.2% of the age group.
* 11.7 million for marijuana, accounting for 4.5% of the age group.
* 2.4 million for other illicit drugs, making up 0.9% of the age group.
* Additionally, the Behavioral Risk Factor Surveillance System reported that in 2020, 1.2% of adults admitted to driving when they had consumed too much alcohol within the last 30 days. This behavior led to an estimated 127 million instances of alcohol-impaired driving among U.S. adults. [3]
* In 2019 alone, there were 15,000 injuries and 186 fatalities attributed to street racing. Here are some additional statistics related to street racing in the United States [4]:
* The average age of street racing participants is between 18 and 24 years old [4].
* 49% of high-collision accidents occur on urban roads.
* 31% of all street racing accidents result in fatalities.
* Over 90% of street racing accidents involve male drivers.
Current Methods used to prevent street racing and drunk driving:
Increased Patrols: Police departments may increase patrols in areas known for street racing to discourage the activity and catch those who engage in it.
Undercover Operations: Police may use unmarked vehicles to blend in with traffic and catch street racers in the act.
Traffic Stops: Officers may pull over drivers suspected of street racing or other traffic violations and issue citations or make arrests.
Helicopter Surveillance: Police helicopters can provide an aerial view of street racing activity and assist ground units in apprehending offenders.
Sting Operations: Police may set up sting operations, posing as street racers, to catch those who are participating in illegal races.
Electronic Monitoring: Police may use radar guns, speed cameras, or other technology to monitor speeds and catch street racers in the act.
Community Involvement: Police may work with community members to report suspicious activity and gather information on illegal street racing. [4]
Target Market:
The target market for DIONYSYS are government services. More specifically, law enforcement. This tool will allow the government to tackle impaired driving and street racing from a reactive approach. A report from the World Health Organization has found that road traffic accidents impose an economic burden, costing most nations approximately 3% of their GDP [5].
According to the latest research study, the demand of global Driver Monitoring System Market size & share was valued at approximately USD 1.9 Billion in 2022 and is expected to reach USD 2.23 Billion in 2023 and is expected to reach a value of around USD 5.27 Billion by 2032, at a compound annual growth rate (CAGR) of about 11.3% during the forecast period 2023 to 2032. [6]
[1] “Drunk Driving | NHTSA.” Accessed: Feb. 03, 2024. [Online]. Available: <https://www.nhtsa.gov/risky-driving/drunk-driving>
[2] “Impaired Driving Statistics – MADD Parkland.” Accessed: Feb. 03, 2024. [Online]. Available: <https://maddchapters.ca/parkland/about-us/impaired-driving-statistics/>
[3] “Impaired Driving: Get the Facts | Transportation Safety | Injury Center | CDC.” Accessed: Feb. 03, 2024. [Online]. Available: <https://www.cdc.gov/transportationsafety/impaired_driving/impaired-drv_factsheet.html>
[4] “Street Racing Accidents: Risks, Statistics, and Prevention - California Times Journal.” Accessed: Feb. 03, 2024. [Online]. Available: <https://californiatimesjournal.com/street-racing-accidents-risks-statistics-and-prevention/>
[5] “Driver Monitoring Systems Market Size, Share, Trend Analysis 2024-2033.” Accessed: Feb. 03, 2024. [Online]. Available: <https://www.thebusinessresearchcompany.com/report/driver-monitoring-systems-global-market-report>
[6] C. M. R. P. LIMITED, “[Latest] Global Driver Monitoring System Market Size/Share Worth USD 5.27 Billion by 2032 at a 11.3% CAGR: Custom Market Insights (Analysis, Outlook, Leaders, Report, Trends, Forecast, Segmentation, Growth, Growth Rate, Value),” GlobeNewswire News Room. Accessed: Feb. 03, 2024. [Online]. Available: <https://www.globenewswire.com/en/news-release/2023/10/31/2770703/0/en/Latest-Global-Driver-Monitoring-System-Market-Size-Share-Worth-USD-5-27-Billion-by-2032-at-a-11-3-CAGR-Custom-Market-Insights-Analysis-Outlook-Leaders-Report-Trends-Forecast-Segmen.html>
## What we learned
The key takeaways from QHacks were learning how to apply math and statistics to AI models and apply it to world problems. The lessons that we learned were to effectively communicate with our team on dividing up roles and responsibilities. For example, our team had two members working on AI and the other two working on research and front-end. This allowed us to finish our model and user interface within the giving time and allowed us to gain experience in software that we have not mastered yet. Overall, some of us went more in-depth with the math and statistics while some of us honed our design skills.
## What's next for DIONYSUS
We’d like to integrate a database to potentially store footage and be able to recognize patterns to improve analysis. This would also allow us to refine the model and ensure a higher degree of accuracy for law enforcement. We ran out of time to use Flask to connect the model directly to our React front end fully, so we’d like to finish that. We’d also like to implement detection which enables to cameras to identify vehicle make and model. This would enable law enforcement to follow the path of a suspected impaired/dangerous driver. A large step ahead is implementing the capability of speed analysis in coordination with the straight line test and Fourier analysis. We also can look into securing government contracts for monetization and implementation!
*If you’re curious or want to learn more about our project, check out the github below and follow the instructions to upload your own video and try it out! Some of us will also be continuing to update the project and take it our own direction so follow along!* | ## Inspiration
This project was inspired by my love of walking. We all need more outdoor time, but people often feel like walking is pointless unless they have somewhere to go. I have fond memories of spending hours walking around just to play Pokemon Go, so I wanted to create something that would give people a reason to go somewhere new. I envision friends and family sending mystery locations to their loved ones with a secret message, picture, or video that will be revealed when they arrive. You could send them to a historical landmark, a beautiful park, or just like a neat rock you saw somewhere. The possibilities are endless!
## What it does
You want to go out for a walk, but where to? SparkWalk offers users their choice of exciting "mystery walks". Given a secret location, the app tells you which direction to go and roughly how long it will take. When you get close to your destination, the app welcomes you with a message. For now, SparkWalk has just a few preset messages and locations, but the ability for users to add their own and share them with others is coming soon.
## How we built it
SparkWalk was created using Expo for React Native. The map and location functionalities were implemented using the react-native-maps, expo-location, and geolib libraries.
## Challenges we ran into
Styling components for different devices is always tricky! Unfortunately, I didn't have time to ensure the styling works on every device, but it works well on at least one iOS and one Android device that I tested it on.
## Accomplishments that we're proud of
This is my first time using geolocation and integrating a map, so I'm proud that I was able to make it work.
## What we learned
I've learned a lot more about how to work with React Native, especially using state and effect hooks.
## What's next for SparkWalk
Next, I plan to add user authentication and the ability to add friends and send locations to each other. Users will be able to store messages for their friends that are tied to specific locations. I'll add a backend server and a database to host saved locations and messages. I also want to add reward cards for visiting locations that can be saved to the user's profile and reviewed later. Eventually, I'll publish the app so anyone can use it! | # Hawkeye
Hawkeye is a real time multimodal conversation and interaction agent for the Boston Dynamics’ mobile robot Spot. Leveraging OpenAI’s experimental GPT-4 Turbo and Vision AI models, Hawkeye aims to empower everyone, from seniors to healthcare professionals in forming new and unique interactions with the world around them.
## What it does
The core of Hawkeye is powered by its conversation-action engine. Using audio and visual inputs from the real world, all decisions and movements made by Hawkeye are generated and inferenced on the fly in near real time. For instance, when faced with a command like "Hey Spot, can you move a little closer?” Hawkeye digests the task on hand to build step-by-step instructions for the robot to follow. This means that Hawkeye is able to dynamically adapt, change, and learn with new environments. It knows when and how to improve its vantage point, orchestrate complex maneuvers, and advance closer to its target. Hawkeye flawlessly navigates through any environment, all while avoiding a reliance on pre-coded movement patterns.
## How we built it
Hawkeye is powered by OpenAI's experimental GPT-4 Turbo and Vision AI models, alongside Spot’s movement SDK. On initialization, Hawkeye uses speech-to-text to wait for an interaction from the user, where it then determines the type and intention of the request. Then, relevant information is fed into OpenAI’s GPT4-Turbo and Vision to generate a chain of command for the robot to follow, with the ability to map out its actions and navigate to the goal. Incrementally, Hawkeye reevaluates and replans its movements to eventually reach its intended goal.
## Challenges we ran into
Our main challenge with developing on the Spot is finding ways to rapidly build and prototype new features without having to rebuild the entire project on every change. As such, we developed a custom command server to push to push, patch, and deploy new code over websockets without needing a reboot. This rapidly increased our rate of innovation with the Spot robot, giving us the flexibility and maneuverability to make our project a reality.
Another main challenge was determining what kind of command a phrase is when it’s first received from the mic. Since there are “movement”, “image processing”, and “general commands”, all with their own functionality, it was difficult coming up with a simple way to classify each command to be one of the three. We resolved this by creating an input delegate which makes an API call to OpenAI, and makes a judgment on what classification it thinks the command is.
## Accomplishments that we're proud of
Learning how to engineer prompts and rapidly iterate to make it more accurate was a feat that we're proud of! The learning curve for Spot's SDK was quite steep and we're happy to have come out of it with minimal team friction. We also got it to dance in the last hour by just telling it to dance, which was very satisfying. Not breaking the $75k robot was relieving as well!
## What we learned
By using Spot's and its SDK, we worked with a lot of hardware which we otherwise would never have access to. We learnt a lot about the physical sensors and I/O onboard. Moreover, it was the first time working with facial and object recognition libraries for a lot of us. Finally, we became much more proficient with GPT APIs.
## What's next for Hawkeye
Looking ahead, we envision further refining Hawkeye by incorporating dynamic object tracking along with facial recognition tech to improve the breadth of human-assistance tasks. This would allow it to dynamically change its motion to avoid moving obstacles (e.g., people in a moving crowd). Spot has huge potential as a guide dog once combined with AI, being able to talk to and guide its owner — with Hawkeye’s interaction software, we believe this level of robotics collaboration will shape the future.
## Technologies Used
* Spot® - The Agile Mobile Robot
* Spot SDK for control over hardware
* Websockets for our dynamic code launcher
* Python as language of choice
* Docker for easy containerization
* GPT-4's vision & turbo preview models | partial |
## Inspiration 💡
Do your eyes ever feel strained and dry after hours and hours spent staring at screens? Has your eye doctor ever told you about the 20-20-20 rule? Good thing we’ve automated it for you along with personalized analysis of your eye activity using AdHawk’s eye tracking device.
The awesome AdHawk demos blew us away, and we were inspired by their subtle but powerful features: the glasses can track the user's gaze in three dimensions, recognize blink events, and capture the world through an external camera. We knew that our goal to remedy this healthcare crisis could be achieved with AdHawk.
## What it does 💻
Poor eye health has become an increasingly important issue in today's digital world, and we want to help. While you're working at your desktop, you'll wear the wonderful AdHawk glasses. Every 20 minutes or so, our connected app will alert you to look away for a 20-second eye break. Using the eye tracking, the app checks that you're actually looking at least 20 feet away; otherwise, the timer pauses.
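To make the break logic concrete, here's a minimal sketch of the timer gate, assuming a hypothetical `read_gaze_depth_m()` helper that wraps the vergence-based depth estimate coming off AdHawk's Python SDK stream (the actual SDK calls aren't shown here):

```python
import time

TWENTY_FEET_M = 6.1      # roughly 20 feet expressed in metres
BREAK_SECONDS = 20       # length of each eye break
WORK_SECONDS = 20 * 60   # ~20 minutes of screen time between breaks


def run_break(read_gaze_depth_m):
    """Count down a 20-second break, crediting time only while the user
    is actually looking at least ~20 feet away."""
    remaining = BREAK_SECONDS
    while remaining > 0:
        depth = read_gaze_depth_m()                  # latest depth estimate (metres)
        if depth is not None and depth >= TWENTY_FEET_M:
            remaining -= 1                           # one valid second of break time
        # if the user glances back at the screen, the countdown simply pauses
        time.sleep(1)


def main_loop(read_gaze_depth_m, notify):
    while True:
        time.sleep(WORK_SECONDS)                     # let the user work for ~20 minutes
        notify("Time for a 20-20-20 break: look 20 feet away for 20 seconds!")
        run_break(read_gaze_depth_m)
        notify("Break complete. Back to work!")
```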
We also built an eye exercise game where you move a ball around with your eyes to hit cubes randomly placed on the screen. This engages the eye muscles in a fun and exciting way to improve eye tracking and eye teaming, and to help mitigate myopia.
## How we built it 🛠️
Our frontend uses React.js and Styled Components, with React Three Fiber powering the eye exercise game. Our backend uses Python with AdHawk's SDK and Flask, plus Firebase for our database.
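As a rough idea of how the Python backend hands gaze data to the React frontend, here is a stripped-down Flask endpoint; the background thread that fills `latest_metrics` from the AdHawk SDK is assumed rather than shown:

```python
from threading import Lock

from flask import Flask, jsonify

app = Flask(__name__)

# Updated by a background thread listening to the AdHawk SDK stream (not shown).
latest_metrics = {"gaze_depth_m": None, "on_break": False, "break_seconds_left": 0}
metrics_lock = Lock()


@app.route("/api/gaze")
def get_gaze():
    """Return the latest gaze metrics so the React UI can render the
    break timer and the 20-foot distance check in near real time."""
    with metrics_lock:
        return jsonify(latest_metrics)


if __name__ == "__main__":
    app.run(port=5000)
```

The React side simply polls this endpoint every second or so to drive the timer UI and the exercise game.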
## Challenges we ran into ⛰️
Setting up the glasses to accurately detect gaze depth was difficult, and it was the key metric for ensuring the user was actually taking a 20-foot eye break for the full 20 seconds. Connecting this data to the frontend was also a bit of a challenge at first, but with our Flask and React stack the integration ended up being streamlined.
We also wanted to record screen-time analytics by logging any instance where the user's viewing distance fell below a set threshold. This gives users a chance to gauge their eye health and better understand their true viewing habits. It was a bit of a challenge since it was our first time using CockroachDB.
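Because CockroachDB speaks the PostgreSQL wire protocol, logging a close-viewing event can be done with an ordinary psycopg2 connection; the table and column names below are illustrative rather than our exact schema:

```python
from datetime import datetime, timezone

import psycopg2

# CockroachDB exposes a PostgreSQL-compatible interface (default port 26257).
conn = psycopg2.connect("postgresql://visionary:password@localhost:26257/visionary")


def log_close_viewing(user_id: str, distance_m: float, threshold_m: float = 0.5):
    """Record an event whenever the viewing distance dips below the threshold."""
    if distance_m >= threshold_m:
        return
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO close_viewing_events (user_id, distance_m, recorded_at) "
            "VALUES (%s, %s, %s)",
            (user_id, distance_m, datetime.now(timezone.utc)),
        )
    conn.commit()
```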
## Accomplishments that we're proud of 🏅
As coders and avid tech users, we are proud to have built a functioning app that we would actually use in our own lives. Many of us personally struggle with vision problems, and Visionary makes it easy to reduce issues like myopia and eye strain. We're also proud of the frontend and of successfully incorporating the incredible AdHawk glasses into our project.
## What we learned 📚
Start small and dream big. We ensured that the glasses would be able to track viewing distance and send that data to our frontend first before moving on to other features, like a landing page, data analytics, and our database setup.
## What's next for Visionary 🥅
We would love to incorporate other use cases for the AdHawk glasses, including more guided eye exercises with eye tracking, focus tracking that ensures the user's eyes stay on screen, and much more. Customizable settings are also a next step. Visionary would also make for an awesome mobile app, letting users reduce eye strain on their phones and tablets. The possibilities are truly endless.
Our team focuses on real-world problems. One of our own classmates is blind, and we've witnessed firsthand the difficulties he encounters during lectures, particularly when it comes to accessing the information presented on the board.
It's a powerful reminder that innovation isn't just about creating flashy technology; it's about making a tangible impact on people's lives. "Hawkeye" isn't a theoretical concept; it's a practical solution born from a genuine need.
## What it does
"Hawkeye" utilizes Adhawk MindLink to provide essential visual information to the blind and visually impaired. Our application offers a wide range of functions:
* **Text Recognition**: "Hawkeye" can read aloud whiteboard text, screens, and all text that our users would not otherwise see.
* **Object Identification**: The application identifies text and objects in the user's environment, providing information about their size, shape, and position.
* **Answering Questions** Hawkeye takes the place of Google for the visually impaired, using pure voice commands to search
## How we built it
We built "Hawkeye" by combining state-of-the-art computer vision and natural language processing algorithms with the Adhawk MindLink hardware. The development process involved several key steps:
1. **Model Integration**: We used open-source AI models to recognize and describe text and objects accurately (a minimal sketch of this read-aloud pipeline follows this list).
2. **Input System**: We developed a user-friendly voice input system that can be picked up by anyone.
3. **Testing and Feedback**: Extensive testing and consultation with the AdHawk team was conducted to fine-tune the application's performance and usability.
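The write-up above doesn't name the exact models, so the sketch below uses pytesseract for OCR and pyttsx3 for offline text-to-speech as stand-ins for the "read the whiteboard aloud" flow; both library choices are assumptions rather than what actually shipped on the MindLink:

```python
# Illustrative only: extract visible text from a captured frame and speak it.
import pytesseract
import pyttsx3
from PIL import Image

def read_scene_aloud(image_path: str) -> str:
    text = pytesseract.image_to_string(Image.open(image_path))  # OCR the frame
    spoken = text.strip() or "No readable text found."
    engine = pyttsx3.init()
    engine.say(spoken)        # speak the recognized text back to the user
    engine.runAndWait()
    return spoken
```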
## Challenges we ran into
Building "Hawkeye" presented several challenges:
* **Real-time Processing**: We knew that real-time processing of so much data on a wearable device was possible, but did not know how much latency there would be. Fortunately, with many optimizations, we were able to get the processing to acceptable speeds.
* **Model Accuracy**: Ensuring high accuracy in text and object recognition, as well as facial recognition, required continuous refinement of our AI models.
* **Hardware Compatibility**: Adapting our software to work effectively with Adhawk MindLink's hardware posed compatibility challenges that we had to overcome.
## Accomplishments that we're proud of
We're immensely proud of what "Hawkeye" represents and the impact it can have on the lives of blind and visually impaired individuals. Our accomplishments include:
* **Empowerment**: Providing a tool that enhances the independence and quality of life of visually impaired individuals. No longer having to rely on transcribers and assistants is a real gain in independence.
* **Inclusivity**: Breaking down barriers to education and employment, making these opportunities more accessible.
* **Innovation**: Combining cutting-edge technology and AI to create a groundbreaking solution for a pressing societal issue.
* **User-Centric Design**: Prioritizing user feedback and needs throughout the development process to create a genuinely user-friendly application.
## What we learned
Throughout the development of "Hawkeye," we learned valuable lessons about the power of technology to transform lives. Key takeaways include:
* **Empathy**: Understanding the daily challenges faced by visually impaired individuals deepened our empathy and commitment to creating inclusive technology.
* **Technical Skills**: We honed our skills in computer vision, natural language processing, and hardware-software integration.
* **Ethical Considerations**: We gained insights into the ethical implications of AI technology, especially in areas like facial recognition.
* **Collaboration**: Effective teamwork and collaboration were instrumental in overcoming challenges and achieving our goals.
## What's next for Hawkeye
The journey for "Hawkeye" doesn't end here. In the future, we plan to:
* **Expand Functionality**: We aim to enhance "Hawkeye" by adding new features and capabilities, such as enhanced indoor navigation and support for more languages.
* **Accessibility**: We will continue to improve the user experience, ensuring that "Hawkeye" is accessible to as many people as possible.
* **Partnerships**: Collaborate with organizations and institutions to integrate "Hawkeye" into educational and workplace environments.
* **Advocacy**: Raise awareness about the importance of inclusive technology and advocate for its widespread adoption.
* **Community Engagement**: Foster a supportive user community for sharing experiences, ideas, and feedback to further improve "Hawkeye."
With "Hawkeye," our vision is to create a more inclusive and accessible world, where visual impairment is no longer a barrier to achieving one's dreams and aspirations. Together, we can make this vision a reality. | ## Inspiration
Have you ever wanted to go tandem biking but lacked the most important thing, a partner? We realize that this is a widespread issue and hope to help tandem bikers of all kinds, including both recreational users and those who seek commuting solutions.
## What it does
Our project helps you find a suitable tandem biking partner using a Tinder-like system. It also builds routes and includes features such as playlist curation and smart conversation starters.
## How we built it
The web app was developed using Django.
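A minimal sketch of how the swipe data could be modeled in Django is below; the model and field names are assumptions rather than our actual schema:

```python
# Hypothetical swipe model: a match exists when both riders right-swipe each other.
from django.conf import settings
from django.db import models

class Swipe(models.Model):
    swiper = models.ForeignKey(settings.AUTH_USER_MODEL, related_name="swipes_made",
                               on_delete=models.CASCADE)
    swipee = models.ForeignKey(settings.AUTH_USER_MODEL, related_name="swipes_received",
                               on_delete=models.CASCADE)
    liked = models.BooleanField()                    # right swipe = True, left swipe = False
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        unique_together = ("swiper", "swipee")       # one swipe per pair

    def is_match(self) -> bool:
        return self.liked and Swipe.objects.filter(
            swiper=self.swipee, swipee=self.swiper, liked=True).exists()
```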
## Challenges we ran into
Developing an actual swipe mechanism was fairly difficult.
## Accomplishments that we're proud of
We're proud of our members' ingenuity in developing solutions with Django, a framework none of us had prior experience with.
## What we learned
We learned that tons of people are actually really interested in tandem biking and would download the app.
## What's next for Tander
Three. Seat. Tandem. Bike. | partial |
## Inspiration
The inspiration for Leyline came from [The Interaction Company of California's challenge](https://interaction.co/calhacks) to design an AI agent that proactively handles emails. While Google's AI summarization features in their email app were a good start, we felt they lacked the comprehensive functionality users need. We envisioned an agent that could handle more complex tasks like filling out forms and booking flights, truly taking care of the busy work in your inbox.
## What it does
Leyline connects to your Gmail account and parses incoming emails. It uses AI models to analyze the subject, body text, and attachments to determine the best course of action for each email. For example, if there's a form attached with instructions to fill it out, Leyline completes the form for you using information from our database.
## How we built it
We developed Leyline as a Next.js app with Supabase as the backend. We use Google Cloud Pub/Sub to connect to Gmail accounts. A custom webhook receives emails from the Pub/Sub topic. The webhook first summarizes the email, then decides if there's a possible tool it could use. We implemented Groq's tool-calling models to determine appropriate actions, such as filling out attached forms using user data from our database. The frontend was built with Tailwind CSS and shadcn UI components.
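The webhook itself is a Next.js route, but the decoding step it performs is simple enough to illustrate; the Python sketch below shows the shape of a Gmail watch notification delivered over a Pub/Sub push subscription (field names follow the Gmail push-notification format, everything else is illustrative):

```python
# Illustrative decoding of a Pub/Sub push body carrying a Gmail notification.
import base64
import json

def parse_gmail_push(push_body: dict) -> dict:
    raw = push_body["message"]["data"]              # base64-encoded JSON payload
    payload = json.loads(base64.b64decode(raw))
    return {
        "email": payload["emailAddress"],
        "history_id": payload["historyId"],         # used to fetch new messages since last sync
    }

# After this, the webhook pulls the new messages via the Gmail API, summarizes them,
# and decides whether a tool (e.g. form filling) should run.
```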
## Challenges we ran into
1. Processing Gmail messages correctly was challenging due to conflicting documentation and inconsistent data formats. We had to implement workarounds and additional API calls to fetch all the necessary information.
2. Getting the tool-calling workflow to function properly was difficult, as it sometimes failed to call tools or missed steps in the process.
## Accomplishments that we're proud of
* We created an intuitive and visually appealing user interface.
* We implemented real-time email support, allowing emails to appear instantly and their processing status to update live.
* We developed a functional tool workflow for handling various email tasks.
## What we learned
* This was our first time using Google Cloud Pub/Sub, which taught us about the unique characteristics of cloud provider APIs compared to consumer APIs.
* We gained experience with tool-calling in AI models and learned about its intricacies.
## What's next for Leyline
We plan to expand Leyline's capabilities by:
* Adding more tools to handle a wider variety of email-related tasks.
* Implementing a containerized browser to perform tasks like flight check-ins.
* Supporting additional email providers beyond Gmail.
* Developing more specialized tools for processing different types of emails and requests. | This project was developed with the RBC challenge in mind of developing the Help Desk of the future.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and a scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing Docker successfully, and struggling with Kubernetes.
## How we built
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we are calling for each user interaction
* We wrote our own Botfront database during the last day and night
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our web app and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the NLP training data.
## Challenges we faced
Learning brand-new technologies is sometimes difficult! Kubernetes (and CORS) brought us some real pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data> | We went on relentlessly. Beating back against the tide (that being lack of documentation), we created something truly magnificent out of an underused device. What's next you ask? Well, we'll go on with our lives, but in the back of our minds we will always remember the times we had, and the beauty we effected in this sorry, sullen world. | winning |
## Virality Pro: 95% reduced content production costs, 2.5x rate of going viral, 4 high ticket clients
We're already helping companies go viral on Instagram and TikTok, slash the need for large ad spend, and propel unparalleled growth at a 20x lower price.
## The problem: growing a company is **HARD and EXPENSIVE**
Here are the current ways companies grow reliably:
1. **Facebook ads / Google Ads**: Expensive Paid Ads
Producing an ad often costs $2K-$10K+
Customer acquisition cost on Facebook can be as much as $100+, with clicks as high as $10 on Google Ads
Simply untenable for lower-ticket products
2. **Organic Social Media**: Slow growth
Takes a long time and can be unreliable; some brands just cannot grow
Content production, posting, and effective social media management is expensive
Low engagement rates even at 100K+ followers, and hard to stay consistent
## Solution: Going viral with Virality Pro, Complete Done-For-You Viral Marketing
Brands and startups need the potential for explosive growth without needing to spend $5K+ on marketing agencies, $20K+ on ad spend, and getting a headache hiring and managing middle management.
We take care of everything so that you just give us your company name and product, and we manage everything from there.
The solution: **viral social media content at scale**.
Using our AI-assisted system, we can produce content following the form of proven viral videos at scale for brands to enable **consistent** posting with **rapid** growth.
## Other brands: spend $5K to produce an ad and $20K on ad spend
They have extremely thin margins with unprofitable growth.
## With Virality Pro: $30-50 per video, 0 ad spend, produced reliably for fast viral growth
Professional marketers and marketing agencies cost hundreds of thousands of dollars per year.
With Virality Pro, we can churn out **400% more content for 5 times less.**
This content can easily get 100,000+ views on tik tok and instagram for under $1000, while the same level of engagement would cost 20x more traditionally.
## Startups, Profitable Companies, and Brands use Virality Pro to grow
Our viral videos drive growth for early to medium-sized startups and companies, providing them a lifeline to expand rapidly.
## 4 clients use Virality Pro and are working with us for growth
1. **Minute Land** is looking to use Virality Pro to consistently produce ads, scaling to **$400K+** through viral videos off $0 in ad spend
2. **Ivy Roots Consulting** is looking to use Virality Pro to scale their college consulting business in a way that is profitable **without the need for VC money**. Instead of $100 CAC through paid ads, the costs with Virality Pro are close to 0 at scale.
3. **Manifold** is looking to use Virality Pro to go viral on social media over and over again to promote their new products without needing to hire a marketing department
4. **Yoodli** is looking to use Virality Pro to manage rapid social media growth on TikTok/Instagram without the need to expend limited funding for hiring middle managers and content producers to take on headache-inducing media projects
## Our team: Founders with multiple exits, Stanford CS+Math, University of Cambridge engineers
Our team consists of the best of the best, including Stanford CS/Math experts with Jane Street experience, founders with multiple large-scale exits multiple times, Singaporean top engineers making hundreds of thousands of dollars through past ventures, and a Cambridge student selected as the top dozen computer scientists in the entire UK.
## Business Model
Our pricing system charges $1900 per month for our base plan (5 videos per week), with our highest value plan being $9500 per month (8 videos per day).
With our projected goal of 100 customers within the next 6 months, we can make $400K in MRR with the average client paying $4K per month.
## How our system works
Our technology is split into two sectors: semi-automated production and fully-automated production.
Currently, our main offer is semi-automated production, with the fully-automated content creation sequence still in production.
## Semi-Automated AI-Powered Production Technology
We utilize a series of templates built around prompt engineering and fine-tuning models to create a large variety of content for companies around a single format.
We then scale the number of templates currently available to be able to produce hundreds and thousands of videos for a single brand off of many dozens of formats, each with the potential to go viral (having gone viral in the past).
## Creating the scripts and audios
Our template system uses AI to produce the scripts and the on-screen text, which are then fed into a database system. There, a marketing expert verifies the scripts and adjusts them to improve their viral potential. For each template, a set of separate audio options is provided, and scripts are built around them.
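The script-generation step is roughly the call below; the prompt wording and model name are placeholders rather than our production prompts:

```python
# Minimal sketch of the template-driven script step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_script(template: str, brand: str, product: str) -> str:
    prompt = (
        f"Rewrite this proven viral video format for {brand}'s product '{product}'. "
        f"Return a 30-second script plus on-screen text.\n\nFormat:\n{template}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content  # reviewed by a marketer before use
```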
## Sourcing Footage
For each client, we source a large database of footage from filmed clips, AI-generated video, motion-graphic images, and long YouTube videos that we break down with software into small clips, each representing a single shot.
## Text to Speech
We use realistic-sounding AI voices and default AI voices to power the audio. This has proven to work in the past and can be produced consistently at scale.
## Stitching it all together
Using our system, we then compile the footage, text script, and audio into one streamlined sequence, after which it can be reviewed and posted onto social media.
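In its simplest form, the stitching step is an ffmpeg invocation like the sketch below (paths and encoding flags are illustrative; the real pipeline also burns in the on-screen text):

```python
# Concatenate the selected clips and lay the generated voice-over under them.
import subprocess
import tempfile

def stitch(clips: list[str], voiceover: str, output: str) -> None:
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for clip in clips:
            f.write(f"file '{clip}'\n")                # ffmpeg concat-demuxer playlist
        playlist = f.name
    subprocess.run([
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-i", playlist,  # video: concatenated clips
        "-i", voiceover,                               # audio: TTS voice-over
        "-map", "0:v", "-map", "1:a",
        "-c:v", "libx264", "-shortest", output,
    ], check=True)
```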
## All done within 5 to 15 minutes per video
Instead of taking hours, we can get it done in **5 to 15 minutes**, which we are continuing to shave down.
## Fully Automated System
Our fully automated system is a work in progress that removes the need for human interaction and fully automates the video production, text creation, and other components, stitched together without the need for anyone to be involved in the process.
## Building the Fully Automated AI System
Our project was built employing Reflex for web development, OpenAI for language model integration, and DALL-E for image generation. Utilizing Prompt Engineering alongside FFmpeg, we synthesized relevant images to enhance our business narrative.
## Challenges Faced
Challenges we encountered included slow Wi-Fi, the steep learning curve of prompt engineering, and adapting to Reflex, which diverges from conventional frameworks like React or Next.js for web application development.
## Future of Virality Pro
We are continuing to innovate our fully-automated production system and create further templates for our semi-automated systems. We hope that we can reduce the costs of production on our backend and increase the growth.
## Projections
We project to scale to 100 clients in 6 months to produce $400K in Monthly Recurring Revenue, and within a year, scale to 500 clients for $1.5M in MRR. | ## Inspiration
In today's fast-paced digital world, creating engaging social media content can be time-consuming and challenging. We developed Expresso to empower content creators, marketers, and businesses to streamline their social media workflow without compromising on quality or creativity.
## What it does
Expresso is an Adobe Express plugin that revolutionizes the process of creating and optimizing social media posts. It offers:
1. Intuitive Workflow System: Simplifies the content creation process from ideation to publication.
2. AI-Powered Attention Optimization: Utilizes a human attention (saliency) model (SUM) to provide feedback on maximizing post engagement.
3. Customizable Feedback Loop: Allows users to configure iterative feedback based on their specific needs and audience.
4. Task Automation: Streamlines common tasks like post captioning and scheduling.
## How we built it
We leveraged a powerful tech stack to bring Expresso to life:
* React: For building a responsive and interactive user interface
* PyTorch: To implement our AI-driven attention optimization model
* Flask: To create a robust backend API (a minimal sketch of the saliency-scoring endpoint follows this list)
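A simplified sketch of that endpoint is below; how the SUM weights are loaded and preprocessed is elided, and the preprocessing values and score aggregation are assumptions:

```python
# Hypothetical feedback endpoint: the Adobe Express plugin posts a rendered design
# and receives an attention score back.
import io
import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

app = Flask(__name__)
model = torch.load("sum_model.pt", map_location="cpu")  # placeholder for the SUM weights
model.eval()

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])

@app.post("/saliency")
def saliency():
    image = Image.open(io.BytesIO(request.files["design"].read())).convert("RGB")
    with torch.no_grad():
        saliency_map = model(to_tensor(image).unsqueeze(0))  # predicted attention map
    score = float(saliency_map.mean())        # crude engagement proxy returned to the UI
    return jsonify({"attention_score": score})
```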
## Challenges we ran into
Some of the key challenges we faced included:
* Integrating the SUM model seamlessly into the Adobe Express environment
* Optimizing the AI feedback loop for real-time performance
* Ensuring cross-platform compatibility and responsiveness
## Accomplishments that we're proud of
* Successfully implementing a state-of-the-art human attention model
* Creating an intuitive user interface that simplifies complex workflows
* Developing a system that provides actionable, AI-driven insights for content optimization
## What we learned
Throughout this project, we gained valuable insights into:
* Adobe Express plugin development
* Integrating AI models into practical applications
* Balancing automation with user control in creative processes
## What's next for Expresso
We're excited about the future of Expresso and plan to:
1. Expand our AI capabilities to include trend analysis and content recommendations
2. Integrate with more social media platforms for seamless multi-channel publishing
3. Develop advanced analytics to track post performance and refine optimization strategies
Try Expresso today and transform your design and marketing workflow! | ## Inspiration
Last year we did a project with our university looking to optimize the implementation of renewable energy sources for residential homes. Specifically, we determined the best designs for home turbines given different environments. In this project, we decided to take this idea of optimizing the implementation of home power further.
## What it does
A web application allows users to enter an address and determine if installing a backyard wind turbine or solar panel is more profitable/productive for their location.
## How we built it
Using an HTML front end, we send the user's address to a Python Flask back end, where a combination of external APIs, web scraping, researched equations, and our own logic and math predicts how the selected piece of technology will perform.
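Under the hood, the comparison reduces to two textbook estimates; the sketch below shows the shape of that math, with the defaults being illustrative and the real inputs coming from the scraped climate data and equipment spec sheets:

```python
# Standard back-of-the-envelope power estimates used for the comparison.
import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level

def wind_power_watts(wind_speed_ms: float, rotor_diameter_m: float, cp: float = 0.35) -> float:
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3 * cp   # P = 0.5 * rho * A * v^3 * Cp

def solar_energy_kwh_per_day(panel_area_m2: float, efficiency: float,
                             peak_sun_hours: float, performance_ratio: float = 0.75) -> float:
    # 1 kW/m^2 is the standard "peak sun" irradiance
    return panel_area_m2 * efficiency * peak_sun_hours * performance_ratio
```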
## Challenges we ran into
We were hoping to use Google's Earth Engine to gather climate data, but we were never approved for the $25 credit, so we had to find alternatives. There aren't a lot of good options for gathering the necessary solar and wind data, so we had to use a combination of APIs and web scraping, which ended up being a bit more convoluted than we hoped. Integrating the back end with the front end was also very difficult because we don't have much experience with full-stack development working end to end.
## Accomplishments that we're proud of
We spent a lot of time coming up with idea for EcoEnergy and we really think it has potential. Home renewable energy sources are quite an investment, so having a tool like this really highlights the benefits and should incentivize people to buy them. We also think it's a great way to try to popularize at-home wind turbine systems by directly comparing them to the output of a solar panel because depending on the location it can be a better investment.
## What we learned
During this project we learned how to predict the power output of solar panels and wind turbines based on windspeed and sunlight duration. We learned how to combine a back-end built in python to a front-end built in HTML using flask. We learned even more random stuff about optimizing wind turbine placement so we could recommend different turbines depending on location.
## What's next for EcoEnergy
The next step for EcoEnergy would be to improve the integration between the front and back end. As well as find ways to gather more location based climate data which would allow EcoEnergy to predict power generation with greater accuracy. | winning |
## Inspiration
I, Jennifer Wong, went through many mental health hurdles and struggled to get the specific help that I needed. I was fortunate to find a relatable therapist that gave me an educational understanding of my mental health, which helped me understand my past and accept it. I was able to afford this access to mental health care and connect with a therapist similar to me, but that's not the case for many racial minorities. I saw the power of mental health education and wanted to spread it to others.
## What it does
Takes a personalized assessment of your background and mental health in order to provide a curated and self-guided educational program based on your cultural experiences.
You can journal about your reflections as you learn through watching each video. Videos are curated as an MVP, but eventually, we want to hire therapists to create these educational videos.
## How we built it
## Challenges we ran into
* Some of our engineers dropped the project or couldn't attend the working sessions, so there were issues with workload. There were also technical feasibility concerns since our knowledge of Swift was limited.
## Accomplishments that we're proud of
Proud that overall, we were able to create a fully functioning app that still achieves our mission. We were happy to get the journal tool completed, which was the most complicated.
## What we learned
We learned how to cut scope when we lost engineers on the team.
## What's next for Empathie (iOS)
We will get more customer validation about the problem and see if our idea resonates with people. We are currently getting feedback from therapists who work with people of color.
In the future, we would love to partner with schools to provide these types of self-guided services since there's a shortage of therapists, especially for underserved school districts. | ## Inspiration
**A lot of people have stressful things on their mind right now.** [According to a Boston University study, "depression symptom prevalence was more than 3-fold higher during the COVID-19 pandemic than before."](https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2770146)
Sometimes it’s hard to sleep or get a good night’s rest because of what happened that day. If you say or write everything down, it helps get it out of your mind so you’re not constantly thinking about it. Diaries take a long time to write in, and sometimes you want to talk. **Voice diaries aren't common and they are quicker and easier to use than a real diary.**
## What it does
When you're too tired to write out your thoughts at the end of the day, **you can simply talk aloud and our app will write it down for you, easy right?**
Nite Write asks you questions to get you thinking about your day. It listens to you while you speak. You can take breaks and continue speaking to the app. You can go back and look at old posts and reflect on your days!
## How we built it
* We used **Figma** to plan out the design and flow of our web app.
* We used **WebSpeech API** and **JavaScript** to hook up the speech-to-text transcription.
* We used **HTML** and **CSS** for the front-end of the web app.
* And lastly, we used **Flask** to put the entire app together (a minimal sketch of the entry-saving route follows this list).
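The route names and fields below are illustrative rather than our exact code: the browser handles the speech-to-text with the Web Speech API and simply posts the finished transcript to Flask.

```python
# Minimal sketch of the diary-entry routes.
import datetime
from flask import Flask, jsonify, request

app = Flask(__name__)
entries = []  # stand-in for persistent storage

@app.post("/entries")
def save_entry():
    data = request.get_json()
    entry = {
        "prompt": data.get("prompt"),        # the question Nite Write asked
        "text": data["transcript"],          # what the user said, transcribed client-side
        "created_at": datetime.datetime.utcnow().isoformat(),
    }
    entries.append(entry)
    return jsonify(entry), 201

@app.get("/entries")
def list_entries():
    return jsonify(entries)                  # powers the "look back at old posts" page
```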
## Challenges we ran into
Our first challenge was understanding how to use Flask: its routes, templates, and syntax. Another challenge was the lack of time and integrating the different parts of the app while working remotely. It was difficult to coordinate and use our time efficiently since we lived all over the country in different time zones.
## Accomplishments that we're proud of
**We are proud of being able to come together virtually to address this problem we all had!**
## What we learned
We learned how to use Flask, the Web Speech API, and CSS. We also learned how to put together a demo with slides and how to work together virtually.
## What's next for Nite Write?
* Show summaries and trends on a person's most frequent entry topics or emotion
* Search feature that filters your diary entries based on certain words you used
* Light Mode feature
* Ability to sort entries based on topic/etc | ## Inspiration
The number of people who struggle with mental health is greater than ever, and it continues to grow. We both have close friends who struggle with mental health issues, and we want to know the best thing we can do for them. Espect solves this by providing a platform where people can express how they like to be treated during times of high stress, and by notifying users about what their potential mental health cycle could look like.
## What it does
Espect monitors one's day-to-day mental health. The user inputs six different categories of mental health influences, and machine learning then estimates what their mental state could look like in the near future. It also serves as a platform for people to share what works best for them, in terms of what other people can do, to ensure the best treatment and resources.
## How we built it
We used Microsoft Azure to develop the machine learning model and Apple's Xcode for the app interface. Stop by the demo or check out the GitHub to learn more.
## Challenges we ran into
Some challenges we ran into were getting the back-end (Python, Azure) to connect with the front-end (Swift, iOS). The demo will be done using 2 machines due to this. Some other challenges we overcame were having to pivot and change directions completely about 6 hours in to the hackathon. Plus, all of the languages we worked with, we had no prior experience with, but it was a fun learning experience.
## Accomplishments that we're proud of
Using Illustrator, Azure and machine learning was really cool. We also learned Swift and Python over the course of this challenge. We're very happy with the way this turned out.
## What we learned
We learned how to use software such as Illustrator and Azure as well as app development and implementation of machine learning algorithms. We also learned about time management at a hackathon and how to have good communication between front-end and back-end devs.
## What's next for ESPECT
We hope we can get the app integration working and hopefully get an Android version working as well. As for features, we hope to incorporate passive data such as weather, news and local politics into our inputs, as well as things such as Fitbit/smart watch data. This will help diversify our inputs in case the user forgets/is unable/doesn't want to enter data for the day.
Come chat with us if you have any questions :) | partial |
## Inspiration
Public washrooms can be a rough time for users thanks to lots of traffic, clogged toilets, missing toilet paper and more. There’s rarely opportunities to easily provide feedback on these conditions. For management, washrooms often get cleaned too frequently or rarely at all. Let’s bring simple sensors to track the status of frequently used facilities to optimize cleaning for businesses and help users enjoy their shit.
## What it does
System of IoT sensors, opportunity for gesture-based user feedback, streamed to a live dashboard to give management updates
We track:

* Traffic into the washroom
* Methane levels
* Fullness of trash cans

We provide:

* Refill buttons in each stall
* A prompt for a facility rating using Leap Motion gesture tracking
## How we built it
We used a Snapdragon board with three attached sensors to collect data on washroom cleanliness.
We used Leap Motion to build an interactive dashboard where users rate their experience through gestures, a more sanitary and futuristic approach that incentivizes users to participate.
All data is pushed up to Freeboard, where management can see the status of their washrooms.
## Challenges we ran into
The Leap Motion documentation was tricky to work with; we didn't account for web apps needing time to load (the body wouldn't load), and building hand recognition from the ground up was also challenging.
Getting the hardware to work was hard: the Snapdragon with its shield has little documentation, and normal Arduino code wasn't working.
Integrating data with IoT dashboard
## Accomplishments that we're proud of
We found a neat solution to get data from the sensors to the dashboard: the board writes readings to serial, a Python script reads them, and dweet.io publishes them to Freeboard (sketched below).
Building hand recognition in Leap Motion
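The glue script, in essence, looks like the sketch below; the serial port, baud rate, thing name, and line format are assumptions:

```python
# Read sensor lines over serial and publish them to dweet.io, which Freeboard polls.
import requests
import serial

THING = "facile-washroom-1"
port = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)

while True:
    line = port.readline().decode(errors="ignore").strip()   # e.g. "traffic=12,methane=340,bin=78"
    if not line:
        continue
    reading = dict(pair.split("=") for pair in line.split(","))
    requests.get(f"https://dweet.io/dweet/for/{THING}", params=reading, timeout=5)
```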
## What we learned
Learned to use Leap Motion
Live data transmission with multiple sensors
Integrating a ton of IoT data
## What's next for Facile
Store data online so that we can use ML to understand trends in facility behaviour, cluster types of facilities to better estimate cleaning times
Talk to facilities management staff - figure out current way they rate and clean their washrooms, base sensors on that
Partnering with large malls, municipal governments, and office complexes to see if they’d be interested
Applicable to other environments in smart cities beyond bathrooms | ## Inspiration
We were inspired to create a health-based solution (despite focusing on sustainability) due to the recent trend of healthcare digitization, spawning from the COVID-19 pandemic and progressing rapidly with increased commercial usage of AI. We did, however, want to create a meaningful solution with a large enough impact that we could go through the hackathon, motivated, and with a clear goal in mind. After a few days of research and project discussions/refinement sessions, we finally came up with a solution that we felt was not only implementable (with our current skills), but also dealt with a pressing environmental/human interest problem.
## What it does
WasteWizard is designed to be used by two types of hospital users: Custodians and Admin. At the custodian user level, alerts are sent based on timer countdowns to check on wastebin statuses in hospital rooms. When room waste bins need to be emptied, there is an option to select the type of waste and the current room to locate the nearest large bin. Wastebin status (for that room) is then updated to Empty. On the admin side, there is a dashboard to track custodian wastebin cleaning logs (by time, location, and type of waste), large bin status, and overall aggregate data to analyze their waste output. Finally, there is also an option for the admin to empty large garbage bins (once collected by partnering waste management companies) to update their status.
## How we built it
The UI/UX designers employed Figma keeping user intuitiveness in mind. Meanwhile, the backend was developed using Node.js and Express.js, employing JavaScript for server-side scripting. MongoDB served as the database, and Mongoose simplified interactions with MongoDB by defining schemas. A crucial aspect of our project was using the MappedIn SDK for indoor navigation. For authentication and authorization, the developers used Auth0 which greatly enhanced security. The development workflow followed agile principles, incorporating version control for collaboration. Thorough testing at both front-end and back-end levels ensured functionality and security. The final deployment in Azure optimized performance and scalability.
## Challenges we ran into
There were a few challenges we had to work through:
* MappedIn SDK integration/embedding: we used a front-end system that, while technically compatible, was not the best choice to use with MappedIn SDK so we ended up needing to debug some rather interesting issues
* Front-end development, in general, was not any of our strong suits, so much of that phase of the project had us switching between CSS tutorial tabs and our coding screens, which led to us taking more time than expected to finish it up
* Auth0 token issues related to redirecting users and logging out users after the end of a session + redirecting them to the correct routes
* Needing to pare down our project idea to limit the scope to an idea that we could feasibly build in 24 hours while making sure we could defend it in a project pitch as an impactful idea with potential future growth
## Accomplishments that we're proud of
In general, we're all quite proud at essentially full-stack developing a working software project in 24 hours. We're also pretty proud of our project idea, as our initial instinct was to pick broad, flashy projects that were either fairly generic or completely unbuildable in the given time frame. We managed to set realistic goals for ourselves and we feel that our project idea is niche and applicable enough to have potential outside of a hackathon environment. Finally, we're proud of our front-end build. As mentioned earlier, none of us are especially well-versed in front-end, so having our system be able to speak to its user (and have it look good) is a major success in our books.
## What we learned
We learned we suck at CSS! We also learned good project time management/task allocation and to plan for the worst as we were quite optimistic about how long it would take us to finish the project, but ended up needing much more time to troubleshoot and deal with our weak points. Furthermore, I think we all learned new skills in our development streams, as we aimed to integrate as many hackathon-featured technologies as possible. There was also an incredible amount of research that went into coming up with this project idea and defining our niche, so I think we all learned something new about biomedical waste management.
## What's next for WasteWizard
As we worked through our scope, we had to cut out a few ideas to make sure we had a reasonable project within the given time frame and set those aside for future implementation. Here are some of those ideas:
* more accurate trash empty scheduling based on data aggregation + predictive modelling
* methods of monitoring waste bin status through weight sensors
* integration into hospital inventory/ordering databases
As a note, this can be adapted to any biomedical waste-producing environment, not just hospitals (such as labs and private practice clinics). | ## Inspiration
A couple of weeks ago, three of us met up at a new Italian restaurant and started going over the menu. It became very clear to us that there were a lot of options, but also that a lot of them didn't match our dietary requirements. And so we thought of Easy Eats, a solution that analyzes the menu for you, to show you what options are available to you without the disappointment.
## What it does
You first start by signing up to our service through the web app, set your preferences and link your phone number. Then, any time you're out (or even if you're deciding on a place to go) just pull up the Easy Eats contact and send a picture of the menu via text - No internet required!
Easy Eats then does the hard work of going through the menu and comparing the items with your preferences, and highlights options that it thinks you would like, dislike and love!
It then returns the menu to you, and saves you time when deciding your next meal.
Even if you don't have any dietary restrictions, by sharing your preferences Easy Eats will learn what foods you like and suggest better meals and restaurants.
## How we built it
The heart of Easy Eats lies on the Google Cloud Platform (GCP), and the soul is offered by Twilio.
The user interacts with Twilio's APIs by sending and recieving messages, Twilio also initiates some of the API calls that are directed to GCP through Twilio's serverless functions. The user can also interact with Easy Eats through Twilio's chat function or REST APIs that connect to the front end.
In the background, Easy Eats uses Firestore to store user information, and Cloud Storage buckets to store all images and links sent to the platform. From there, the images/PDFs are parsed using either the OCR engine or the Vision AI API (OCR works better with PDFs, whereas Vision AI is more accurate on images). Then, the data is passed through the NLP engine (customized for food) to find synonyms for popular dietary restrictions (such as pork byproducts: salami, ham, ...).
Finally, App Engine glues everything together by hosting the frontend and the backend on its servers.
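For the image branch, the Vision AI call is roughly the sketch below (a simplified illustration; the PDF/OCR-engine branch is analogous, and the surrounding plumbing is omitted):

```python
# Extract the menu text from a photo the user texted in via Twilio.
from google.cloud import vision

def extract_menu_text(image_bytes: bytes) -> str:
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=image_bytes))
    if not response.text_annotations:
        return ""
    return response.text_annotations[0].description  # full OCR'd menu text for the NLP step
```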
## Challenges we ran into
This was the first hackathon for a couple of us, but also the first time for any of us to use Twilio. That proved a little hard to work with as we misunderstood the difference between Twilio Serverless Functions and the Twilio SDK for use on an express server. We ended up getting lost in the wrong documentation, scratching our heads for hours until we were able to fix the API calls.
Further, with so many moving parts a few of the integrations were very difficult to work with, especially when having to re-download + reupload files, taking valuable time from the end user.
## Accomplishments that we're proud of
Overall we built a solid system that connects Twilio, GCP, a back end, Front end and a database and provides a seamless experience. There is no dependency on the user either, they just send a text message from any device and the system does the work.
It's also special to us because we personally found it hard to find good restaurants that match our dietary restrictions. Building it also made us realize just how many foods have alternative names that one would normally have to google.
## What's next for Easy Eats
We plan on continuing development by suggesting local restaurants that are well suited for the end user. This would also allow us to monetize the platform by giving paid-priority to some restaurants.
There's also a lot to be improved in terms of code efficiency (I think we have O(n^4) in one of the functions ahah...) to make this a smoother experience.
Easy Eats will change restaurant dining as we know it. Easy Eats will expand its services and continue to make life easier for people, looking to provide local suggestions based on your preference. | partial |
## Inspiration
We've always wanted to be able to point our phone at an object and know what that object is in another language. So we built that app.
## What it does
Point your phone's camera at an object and the app identifies it for you using the Inception neural network. We then translate the object's name from a source language (English) to a target language, usually one the user wants to learn, using the Google Translation API. Using ARKit, we overlay the name, in both English and the foreign language, on top of the object. To help you retain the word, we also show different ways of using it in a sentence.
All in all, the app is a great resource for learning how to pronounce and learn about different objects in different languages.
## How we built it
We built the frontend mobile app in Swift, used ARKit to place words on top of an object, and used Google Cloud Functions to access APIs.
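A rough Python sketch of the kind of Cloud Function the app calls is below; the function name, request fields, and defaults are assumptions, and the real function also returns example sentences:

```python
# Hypothetical HTTP Cloud Function: translate an object label for the AR overlay.
from google.cloud import translate_v2 as translate

client = translate.Client()

def translate_label(request):
    body = request.get_json()
    label = body["label"]                      # e.g. "coffee cup" from Inception
    target = body.get("target", "es")          # learner's target language
    result = client.translate(label, target_language=target, source_language="en")
    return {"original": label, "translated": result["translatedText"]}
```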
## Challenges we ran into
Dealing with Swift frontend frames, and getting authentication keys to work properly for APIs.
## Accomplishments that we're proud of
We built an app that looks awesome with ARKit and has great functionality. We took an app idea and worked together to make it come to life.
## What we learned
We learned in greater depth how Swift 4 works, how to use AR Kit, and how easy it is to use Google Cloud functions to offload a server-like computation away from your app without having to set up a server.
## What's next for TranslateAR
IPO in December | ## Inspiration
Since the beginning of the hackathon, all of us were interested in building something related to helping the community. Initially we began with the idea of a trash bot, but quickly realized the scope of the project would make it unrealistic. We eventually decided to work on a project that would help ease the burden on both the teachers and students through technologies that not only make learning new things easier and more approachable, but also giving teachers more opportunities to interact and learn about their students.
## What it does
We built a Google Action that gives Google Assistant the ability to help the user learn a new language by quizzing the user on words of several languages, including Spanish and Mandarin. In addition to the Google Action, we also built a very PRETTY user interface that allows a user to add new words to the teacher's dictionary.
## How we built it
The Google Action was built using the Google DialogFlow Console. We designed a number of intents for the Action and implemented robust server code in Node.js and a Firebase database to control the behavior of Google Assistant. The PRETTY user interface to insert new words into the dictionary was built using React.js along with the same Firebase database.
## Challenges we ran into
We initially wanted to implement this project by using both Android Things and a Google Home. The Google Home would control verbal interaction and the Android Things screen would display visual information, helping with the user's experience. However, we had difficulty with both components, and we eventually decided to focus more on improving the user's experience through the Google Assistant itself rather than through external hardware. We also wanted to interface with Android things display to show words on screen, to strengthen the ability to read and write. An interface is easy to code, but a PRETTY interface is not.
## Accomplishments that we're proud of
None of the members of our group were at all familiar with any natural language parsing, interactive project. Yet, despite all the early and late bumps in the road, we were still able to create a robust, interactive, and useful piece of software. We all second guessed our ability to accomplish this project several times through this process, but we persevered and built something we're all proud of. And did we mention again that our interface is PRETTY and approachable?
Yes, we are THAT proud of our interface.
## What we learned
None of the members of our group were familiar with any aspects of this project. As a result, we all learned a substantial amount about natural language processing, serverless code, non-relational databases, JavaScript, Android Studio, and much more. This experience gave us exposure to a number of technologies we would've never seen otherwise, and we are all more capable because of it.
## What's next for Language Teacher
We have a number of ideas for improving and extending Language Teacher. We would like to make the conversational aspect of Language Teacher more natural. We would also like to have the capability to adjust the Action's behavior based on the student's level. Additionally, we would like to implement a visual interface that we were unable to implement with Android Things. Most importantly, an analyst of students performance and responses to better help teachers learn about the level of their students and how best to help them. | ## Inspiration
We wanted to explore what GCP has to offer more in a practical sense, while trying to save money as poor students
## What it does
The app tracks you, and using Google Map's API, calculates a geofence that notifies the restaurants you are within vicinity to, and lets you load coupons that are valid.
## How we built it
React-native, Google Maps for pulling the location, python for the webscraper (*<https://www.retailmenot.ca/>*), Node.js for the backend, MongoDB to store authentication, location and coupons
## Challenges we ran into
React-Native was fairly new, linking a python script to a Node backend, connecting Node.js to react-native
## What we learned
New exposure APIs and gained experience on linking tools together
## What's next for Scrappy.io
Improvements to the web scraper, potentially expanding beyond restaurants. | partial |
## Inspiration
Feeling major self-doubt when you first start hitting the gym or injuring yourself accidentally while working out are not uncommon experiences for most people. This inspired us to create Core, a platform to empower our users to take control of their well-being by removing the financial barriers around fitness.
## What it does
Core analyses the movements performed by the user and provides live auditory feedback on their form, allowing them to stay fully present and engaged during their workout. Our users can also take advantage of the visual indications on the screen where they can view a graph of the keypoint which can be used to reduce the risk of potential injury.
## How we built it
Prior to development, a prototype was created on Figma which was used as a reference point when the app was developed in ReactJs. In order to recognize the joints of the user and perform analysis, Tensorflow's MoveNet model was integrated into Core.
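MoveNet itself runs in the browser through TensorFlow, but the form analysis is plain geometry on the keypoints; the Python snippet below just illustrates that idea, with example joint names and thresholds that are not our exact tuning:

```python
# Illustrative joint-angle check of the kind used for live form feedback.
import math

def joint_angle(a, b, c) -> float:
    """Angle at joint b (degrees) formed by keypoints a-b-c, e.g. hip-knee-ankle."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

def squat_feedback(hip, knee, ankle) -> str:
    knee_angle = joint_angle(hip, knee, ankle)
    if knee_angle > 120:
        return "Go deeper"            # spoken aloud so the user stays present in the workout
    if knee_angle < 60:
        return "Too deep, ease up"
    return "Good depth"
```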
## Challenges we ran into
Initially, it was planned that Core would serve as a mobile application built using React Native, but as we developed a better understanding of the structure, we saw more potential in a cross-platform website. Our team was relatively inexperienced with the technologies that were used, which meant learning had to be done in parallel with the development.
## Accomplishments that we're proud of
This hackathon allowed us to develop code in ReactJs, and we hope that our learnings can be applied to our future endeavours. Most of us were also new to hackathons, and it was really rewarding to see how much we accomplished throughout the weekend.
## What we learned
We gained a better understanding of the technologies used and learned how to develop for the fast-paced nature of hackathons.
## What's next for Core
Currently, Core uses TensorFlow to track several key points and analyzes the information with mathematical models to determine the statistical probability of the correctness of the user's form. However, there's scope for improvement by implementing a machine learning model that is trained on Big Data to yield higher performance and accuracy.
We'd also love to expand our collection of exercises to include a wider variety of possible workouts. | ## Inspiration
To spread the joy of swag collecting.
## What it does
A Hack the North Simulator, specifically of the Sponsor Bay on the second floor. The player will explore sponsor booths, collecting endless amounts of swag along the way. A lucky hacker may stumble upon the elusive, exclusive COCKROACH TROPHY, or a very special RBC GOLD PRIZE!
## How we built it
Unity, Aseprite, Cakewalk
## What we learned
We learned the basics of Unity and git, rigidbody physics, integrating audio into Unity, and the creation of game assets. | ## Inspiration:
The need for an accessible workout tool that helps improve form and keep users engaged.
## What it does:
Gives real-time feedback on the user's form during workouts.
## How we built it
Tracking and Evaluation: This was coded in JavaScript and built using HTML. It incorporates a trained TensorFlow model called PoseNet to receive live data on "keypoints" in a video stream. The keypoints correspond to joints in the user's body, and their motion and relative position are then used to evaluate the user's form.
## Challenges we ran into
Integrating machine learning with computer vision isn't simple, even when trying to use a pre-trained model. Some similar technologies are even more complex or require massive technology requirements (equivalent to ~$2,400 video card) so finding the correct model and platform for our application was critical and challenging.
## Accomplishments that we're proud of
This being the first hackathon for every member of the team, we are very proud of the learning we all achieved and the final product we were able to create. We learned so much about coding languages we were unfamiliar with (some members learned new languages from scratch), computer vision, machine learning, data models, and mobile UI/UX design. With our limited coding experience, we were able to research and persist through learning barriers and finish with something to show for it.
## What we learned
We learned how to track body movements using PoseNet and TensorFlow with Javascript, how to effectively use virtual environments to run the program locally, and how to communicate with the user through UI/UX on mobile devices.
## What's next for Trackout
There is a lot of potential for TrackOut to become a huge platform to host an amazing community of users wanting to improve their workout routine. By using recurring neural networks, TrackOut will be able to provide specific and meaningful feedback to help the user achieve high levels of form and consistency with their workouts. TrackOut will also have an extensive social aspect, connecting users by allowing them to share their own workouts and help each other by providing feedback in comment sections. Finally, Trackout seeks to collaborate with major YouTube and Instagram influencers within the existing online workout space, to bring a high volume of users, and to keep them actively involed in the community. | winning |
## Inspiration
The opioid crisis is a widespread danger, affecting millions of Americans every year. In 2016 alone, 2.1 million people had an opioid use disorder, resulting in over 40,000 deaths. After researching what had been done to tackle this problem, we came upon many pill dispensers currently on the market. However, we failed to see how they addressed the core of the problem - most were simply reminder systems with no way to regulate the quantity of medication being taken, ineffective to prevent drug overdose. As for the secure solutions, they cost somewhere between $200 to $600, well out of most people’s price ranges. Thus, we set out to prototype our own secure, simple, affordable, and end-to-end pipeline to address this problem, developing a robust medication reminder and dispensing system that not only makes it easy to follow the doctor’s orders, but also difficult to disobey.
## What it does
This product has three components: the web app, the mobile app, and the physical device. The web end is built for doctors to register patients, easily schedule dates and timing for their medications, and specify the medication name and dosage. Any changes the doctor makes are automatically synced with the patient’s mobile app. Through the app, patients can view their prescriptions and contact their doctor with the touch of one button, and they are instantly notified when they are due for prescriptions. Once they click on on an unlocked medication, the app communicates with LocPill to dispense the precise dosage. LocPill uses a system of gears and motors to do so, and it remains locked to prevent the patient from attempting to open the box to gain access to more medication than in the dosage; however, doctors and pharmacists will be able to open the box.
## How we built it
The LocPill prototype was designed on Rhino and 3-D printed. Each of the gears in the system was laser cut, and the gears were connected to a servo that was controlled by an Adafruit Bluefruit BLE Arduino programmed in C.
The web end was coded in HTML, CSS, Javascript, and PHP. The iOS app was coded in Swift using Xcode with mainly the UIKit framework with the help of the LBTA cocoa pod. Both front ends were supported using a Firebase backend database and email:password authentication.
## Challenges we ran into
Nothing is gained without a challenge; many of the skills this project required were things we had little to no experience with. From the modeling in the RP lab to the back end communication between our website and app, everything was a new challenge with a lesson to be gained from. During the final hours of the last day, while assembling our final product, we mistakenly positioned a gear in the incorrect area. Unfortunately, by the time we realized this, the super glue holding the gear in place had dried. Hence began our 4am trip to Fresh Grocer Sunday morning to acquire acetone, an active ingredient in nail polish remover. Although we returned drenched and shivering after running back in shorts and flip-flops during a storm, the satisfaction we felt upon seeing our final project correctly assembled was unmatched.
## Accomplishments that we're proud of
Our team is most proud of successfully creating and prototyping an object with the potential for positive social impact. Within a very short time, we accomplished much of our ambitious goal: to build a project that spanned 4 platforms over the course of two days: two front ends (mobile and web), a backend, and a physical mechanism. In terms of just codebase, the iOS app has over 2600 lines of code, and in total, we assembled around 5k lines of code. We completed and printed a prototype of our design and tested it with actual motors, confirming that our design’s specs were accurate as per the initial model.
## What we learned
Working on LocPill at PennApps gave us a unique chance to learn by doing. Laser cutting, Solidworks Design, 3D printing, setting up Arduino/iOS bluetooth connections, Arduino coding, database matching between front ends: these are just the tip of the iceberg in terms of the skills we picked up during the last 36 hours by diving into challenges rather than relying on a textbook or being formally taught concepts. While the skills we picked up were extremely valuable, our ultimate takeaway from this project is the confidence that we could pave the path in front of us even if we couldn’t always see the light ahead.
## What's next for LocPill
While we built a successful prototype during PennApps, we hope to formalize our design further before taking the idea to the Rothberg Catalyzer in October, where we plan to launch this product. During the first half of 2019, we plan to submit this product at more entrepreneurship competitions and reach out to healthcare organizations. During the second half of 2019, we plan to raise VC funding and acquire our first deals with healthcare providers. In short, this idea only begins at PennApps; it has a long future ahead of it. | ## Inspiration
The inspiration for PillHero was to create a complete solution that helps the sick and elderly learn and stick to their medication schedule.
## What it does
Using the iOS app, the user enters their medication schedule, which is composed of a start date, end date, daily timings, and the medication name. The user places their medication on the pill mount. As the time nears for the user to take their medication, the app notifies the user that they should take their medication. The pill mount detects when they take their medication. If the user does not, the app notifies them. Should the user not take the pill within a specified period after this, it is counted as them having missed taking their medication.
## How we built it
-- Hardware --
The pill mount was built out of three main components: the Raspberry Pi 3, an infrared LED, and an infrared phototransistor. The LED and phototransistor are used as a touch-less way of detecting when the user removes the pill bottle from the mount. If this was done close enough to when the user is supposed to take their pills, then the assumption is made that the user took their pills for that time slot.
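A simplified version of that beam-break loop is sketched below; the GPIO pin, polling interval, and server URL are assumptions, not the exact wiring:

```python
# The bottle blocks the IR beam while it sits on the mount, so a visible beam
# means the bottle was just lifted; report that to the backend.
import time
import requests
import RPi.GPIO as GPIO

SENSOR_PIN = 17
SERVER = "http://192.168.0.10:5000/bottle-lifted"

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN)

bottle_present = True
while True:
    beam_visible = GPIO.input(SENSOR_PIN) == GPIO.HIGH
    if beam_visible and bottle_present:          # bottle just removed from the mount
        bottle_present = False
        requests.post(SERVER, json={"event": "lifted", "ts": time.time()})
    elif not beam_visible:                       # bottle placed back
        bottle_present = True
    time.sleep(0.2)
```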
-- Software --
On the software side there was a combination of front end as well as back end developments. For the front end, an iOS mobile application was used to gain input from the user as well as to present calendar information. The input method utilizes speech-to-text technology to translate user voice input to a text input. From there, the user is able to edit the textfield if any mistakes were made. Then to input is passed to an AI for parsing and understanding of the statement. That data is later is sent to a back end server that keeps track of the user's pill intake via communication with the Raspberry Pi 3 and communicating using the calendar screen on the iOS application.
## Challenges we ran into
One of the major challenges we encountered was connecting the Raspberry Pi, the phone, and the server. Due to the security of the Wi-Fi, it was very difficult to SSH into the Raspberry Pi or properly set up HTTP requests between the devices. Our workaround was to set up a LAN that let us connect all our devices and easily make requests between them.
## Accomplishments that we're proud of
Completing a project that involves hardware has been a goal of multiple team members for a while and this hackathon gave them the opportunity to do that. In reality though, the team is most proud that they were able to create an internet of things project where the hardware and software complement each other rather than feel arbitrarily attached.
## What we learned
The entire team learned a significant amount about network programming from troubleshooting issues with the Pi 3 and trying to interface the server with both system components. Otherwise, the learning was different for each member. The members who had experience with hardware were exposed to working with servers and databases. Simultaneously, the members with full-stack experience gained exposure to working with hardware. This resulted in all members becoming more well-rounded developers and gaining experience that will aid them in future hackathons and their careers.
## What's next for PillHero
Post-hackathon, the team plans to continue development of PillHero into a more complete product. Numerous upgrades are planned, like the ability to track non-daily medication routines and functionality with multiple drugs. On the iOS side in particular, implementation of push notifications and text message reminders is planned. As for the pill mount, the team plans to add an accurate analog scale to the device so it can notify the user when they are running out of pills. | ## Inspiration
Forgetting to take a medication is a common problem in older people and is especially likely when an older patient takes several drugs simultaneously. Around 40% of patients forget to take their medicines, and most of them have difficulty differentiating between multiple medications.
1. Patients forget to take their medication
2. Patients get confused between multiple medications because they look similar
3. Patients don't take their medications at the correct time
These habits lead to poor health, such cases are increasing day by day, and most people think that skipping medication is a normal thing, which is bad for their health.
To avoid this, I wanted to make a device that automatically dispenses medicines to patients.
## What it does
It is a 3D-printed vending box actuated by a servo motor and controlled using a Raspberry Pi 4 and a mobile app. The box automatically sorts out medicine according to the time and schedule and vends it from the box using the servo motor. The app is connected to Firebase to store, add, and modify the medicine schedule, and the Raspberry Pi collects that data from Firebase and actuates the servos accordingly.
1. Doctors/caretakers can add a medicine schedule via the mobile app.
2. The patient hears a buzzer along with audio details about the medicine and how many doses they must take.
3. The box automatically rotates via the servo and drops the scheduled medicine out of the box.
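As a rough sketch of the flow described above (not the exact production code), the Pi could poll the Firebase schedule and drive the servo as shown below; the pin, database URL, and schedule schema are assumptions for illustration.

```python
import time
import firebase_admin
from firebase_admin import credentials, db
import RPi.GPIO as GPIO

SERVO_PIN = 18                                       # assumed BCM pin driving the dispenser servo
DB_URL = "https://example-medibox.firebaseio.com"    # hypothetical database URL

cred = credentials.Certificate("serviceAccount.json")  # service-account key file
firebase_admin.initialize_app(cred, {"databaseURL": DB_URL})

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)        # 50 Hz is the usual hobby-servo frame rate
pwm.start(0)

def rotate_to_slot(slot: int) -> None:
    # Map a slot index to a servo duty cycle; exact values depend on the print.
    duty = 2.5 + slot * 2.5          # assumed calibration
    pwm.ChangeDutyCycle(duty)
    time.sleep(1)
    pwm.ChangeDutyCycle(0)           # stop sending pulses so the servo doesn't jitter

def due_entries(now_hhmm: str):
    # Assumed schema: schedules/<id> = {"time": "08:00", "slot": 0, "name": "Aspirin"}
    schedule = db.reference("schedules").get() or {}
    return [s for s in schedule.values() if s.get("time") == now_hhmm]

if __name__ == "__main__":
    while True:
        for entry in due_entries(time.strftime("%H:%M")):
            print(f"Dispensing {entry['name']}")
            rotate_to_slot(entry["slot"])
        time.sleep(60)               # check the schedule once a minute
```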
## How we built it
I bought the Raspberry Pi and servos at the start of the hackathon and started development of the app while the hardware was being shipped.
Timeline
* Day 1-2: Planning of the idea and hardware ordering
* Day 3-4: Research work and planning
* Day 4-6: App development with Flutter + Firebase
* Day 7-9: CAD design and started 3D printing (took 13 hours to print)
* Day 10: Hardware arrived and tested
* Day 10-12: Integrated 3D printed parts and motors and finished the project
* Day 13: Video editing, Devpost documentation, code uploading
* Day 14: Minor video edits and documentation
#### Initial Research and CAD Drawings:
[](https://ibb.co/KWGMGhk)
[](https://ibb.co/cJQD8ss)
[](https://ibb.co/0ZrjBjp)
## Challenges we ran into
* 3D printing failed and took a lot of time
* Late shipment of hardware
* Servo motor gear issues
## Accomplishments that we're proud of
1. New to 3D printing, and printed such a big project for the first time
2. Used a Raspberry Pi for the first time
## What we learned
* CAD Design
* Flutter
* Raspberry Pi
## What's next for MediBox
1. Creating a 2-level box for more medicine capacity using the same number of motors. [image]
2. Add good UI to App
3. Adding Medicine Recognition and automatically ordering . | winning |
## Inspiration
We take our inspiration from our everyday lives. As avid travellers, we often run into places with foreign languages and need help with translations. As avid learners, we're always eager to add more words to our bank of knowledge. As children of immigrant parents, we know how difficult it is to grasp a new language and how comforting it is to hear the voice in your native tongue. LingoVision was born with these inspirations and these inspirations were born from our experiences.
## What it does
LingoVision uses AdHawk MindLink's eye-tracking glasses to capture foreign words or sentences as pictures when given a signal (a double blink). Those sentences are played back as an audio translation (either through an earpiece or out loud with a speaker) in your preferred language. Additionally, LingoVision stores all of the old photos and translations for future review and study.
## How we built it
We used the AdHawk MindLink eye-tracking glasses to map the user's point of view and detect where exactly in that space they're focusing. From there, we used Google's Cloud Vision API to perform OCR and construct bounding boxes around text. We developed a custom algorithm to infer what text the user is most likely looking at, based on the vector projected from the glasses and the available bounding boxes from the CV analysis.
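Our exact inference heuristic is more involved, but a simplified version of the idea, picking the OCR result whose bounding-box centre is closest to the projected gaze point, could look roughly like this (the gaze point is assumed to already be mapped into image pixel coordinates, and this sketch matches a single word rather than a whole sentence):

```python
from math import dist
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def ocr_blocks(image_bytes: bytes):
    """Run Cloud Vision OCR and return (text, centre) pairs for each detected word."""
    response = client.text_detection(image=vision.Image(content=image_bytes))
    blocks = []
    for annotation in response.text_annotations[1:]:   # [0] is the full-image text
        xs = [v.x for v in annotation.bounding_poly.vertices]
        ys = [v.y for v in annotation.bounding_poly.vertices]
        centre = (sum(xs) / len(xs), sum(ys) / len(ys))
        blocks.append((annotation.description, centre))
    return blocks

def text_at_gaze(image_bytes: bytes, gaze_xy: tuple[float, float]):
    """Return the word whose bounding-box centre is nearest the gaze point."""
    blocks = ocr_blocks(image_bytes)
    if not blocks:
        return None
    word, _ = min(blocks, key=lambda b: dist(b[1], gaze_xy))
    return word
```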
After that, we pipe the text output into the DeepL translator API to translate it into a language of the user's choice. Finally, the output is sent to Google's text-to-speech service to be delivered to the user.
We use Firebase Cloud Firestore to keep track of global settings, such as output language, and also a log of translation events for future reference.
## Challenges we ran into
* Getting the eye tracker properly calibrated (it was always a bit off from our actual view)
* Using a Mac, when the officially supported platforms are Windows and Linux (yay virtualization!)
## Accomplishments that we're proud of
* Hearing the first audio playback of a translation was exciting
* Seeing the system work completely hands free while walking around the event venue was super cool!
## What we learned
* We learned how to work within the limitations of the eye tracker
## What's next for LingoVision
One of the next steps in our plan for LingoVision is to develop a dictionary for individual words. Since we're all about encouraging learning, we want our users to see definitions of individual words and add them to a personal dictionary.
Another goal is to eliminate the need to be tethered to a computer. Computers are currently used due to ease of development and software constraints. If a user were able to simply use the eye-tracking glasses with their cell phone, usability would improve significantly. | ## Inspiration
**75% of adults over the age of 50** take prescription medication on a regular basis. Of these people, **over half** do not take their medication as prescribed - either taking them too early (causing toxic effects) or taking them too late (non-therapeutic). This type of medication non-adherence causes adverse drug reactions which is costing the Canadian government over **$8 billion** in hospitalization fees every year. Further, the current process of prescription between physicians and patients is extremely time-consuming and lacks transparency and accountability. There's a huge opportunity for a product to help facilitate the **medication adherence and refill process** between these two parties to not only reduce the effects of non-adherence but also to help save tremendous amounts of tax-paying dollars.
## What it does
**EZPill** is a platform that consists of a **web application** (for physicians) and a **mobile app** (for patients). Doctors first create a prescription in the web app by filling in information including the medication name and indications such as dosage quantity, dosage timing, total quantity, etc. This prescription generates a unique prescription ID and is translated into a QR code that practitioners can print and attach to their physical prescriptions. The patient then has two choices: 1) to either create an account on **EZPill** and scan the QR code (which automatically loads all prescription data to their account and connects with the web app), or 2) choose to not use EZPill (prescription will not be tied to the patient). This choice of data assignment method not only provides a mechanism for easy onboarding to **EZPill**, but makes sure that the privacy of the patients’ data is not compromised by not tying the prescription data to any patient **UNTIL** the patient consents by scanning the QR code and agreeing to the terms and conditions.
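As a small illustration of the QR step only, generating a printable code from a prescription ID takes just a few lines with the `qrcode` package; the payload format below is an assumption, not our production scheme.

```python
import uuid
import qrcode   # pip install qrcode[pil]

def make_prescription_qr(prescription_id: str, out_path: str) -> None:
    # Encode a link that the mobile app can open after scanning; the exact
    # URL scheme below is illustrative, not the platform's real format.
    payload = f"https://ezpill.example.com/rx/{prescription_id}"
    img = qrcode.make(payload)
    img.save(out_path)

if __name__ == "__main__":
    rx_id = str(uuid.uuid4())          # unique prescription ID
    make_prescription_qr(rx_id, f"rx-{rx_id}.png")
    print("printed QR for", rx_id)
```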
Once the patient has signed up, the mobile app acts as a simple **tracking tool** while the medicines are consumed, but also serves as a quick **communication tool** to quickly reach physicians to either request a refill or to schedule the next check-up once all the medication has been consumed.
## How we built it
We split our team into 4 roles: API, Mobile, Web, and UI/UX Design.
* **API**: A Golang Web Server on an Alpine Linux Docker image. The Docker image is built from a laptop and pushed to DockerHub; our **Azure App Service** deployment can then pull it and update the deployment. This process was automated with use of Makefiles and the **Azure** (az) **CLI** (Command Line Interface). The db implementation is a wrapper around MongoDB (**Azure CosmosDB**).
* **Mobile Client**: A client targeted exclusively at patients, written in Swift for iOS.
* **Web Client**: A client targeted exclusively at healthcare providers, written in HTML & JavaScript. The Web Client is also hosted on **Azure**.
* **UI/UX Design**: Userflow was first mapped with the entire team's input. The wireframes were then created using Adobe XD in parallel with development, and the icons were vectorized using Gravit Designer to build a custom assets inventory.
## Challenges we ran into
* Using AJAX to build dynamically rendering websites
## Accomplishments that we're proud of
* Built an efficient privacy-conscious QR sign-up flow
* Wrote a custom MongoDB driver in Go to use Azure's CosmosDB
* Recognized the needs of our two customers and tailored the delivery of the platform to their needs
## What we learned
* We learned the concept of "Collections" and "Documents" in the Mongo(NoSQL)DB
## What's next for EZPill
There are a few startups in Toronto (such as MedMe, Livi, etc.) that are trying to solve this same problem through a pure hardware solution using a physical pill dispenser. We hope to **collaborate** with them by providing the software solution in addition to their hardware solution to create a more **complete product**. | ## Inspiration
The Arduino community provides a full ecosystem for developing systems, and I saw the potential in using hardware, IoT, and cloud integration to provide a unique solution for streamlining business processes.
## What it does
The web app provides a one-stop workflow to manage hundreds of different sensors by adding intelligence on top of each utility provided by the Arduino REST API. Imagine a health-care company that needs to manage all its heart-rate sensors and derive insights quickly and continuously from patient data. Or picture a way for a business to manage customer device location parameters by inputting customized conditions on the data. Or a way for a child to control her robot-controlled coffee machine from school. This app enables many different use cases.
## How we built it
I connected iPhones to the Arduino cloud, built a web app with NodeJS that uses the Arduino IoT API to connect to the cloud, and connected MongoDB to make the app more efficient and scalable. I followed the CRM architecture to build the app and implemented best practices with scalability in mind, since that is the main focus of the app.
## Challenges we ran into
A lot of the problems faced were naturally in the web application, and it required a lot of time.
## Accomplishments that we're proud of
I am proud of the app and its usefulness in different contexts. This is a creative solution that could have real-world uses if the intelligence is implemented carefully.
## What we learned
I learned a LOT about web development, database management and API integration.
## What's next for OrangeBanana
Provided we have more time, we would implement more sensors and more use-cases for handling each of these. | winning |
## Inspiration
I have always been passionate about biotechnology and decided to get immersed in assisted reproduction. I saw that when it comes to in vitro fertilization there is always a lack of donors, which makes the process harder both for clinics and for expectant parents.
## What it does
E-Donna seeks to be the leading sex cell provider for Latin American fertility clinics.
## How it is built
Since it is a web platform, the code was first created in HTML and we designed the interface for an MVP. After that, we plan to use Twilio and other APIs to keep improving our service.
## Accomplishments that I’m proud of
Creating an entrepreneurial project of this type is such a challenge, especially taking into account that I do not have much experience in health care.
## What I learned
I learned that organization is key to success; it is not enough to discover a problem and create a solution. I also need to commit to social responsibility.
## What's next for E-Donna
Continue the development of our idea in Mexico to turn it into a reality. | ## Inspiration
I have realized how difficult and important it is for women to maintain their health and hygiene. In a similar fashion, it is equally difficult to maintain the health of the environment. My aim is to tackle both of these problems with one digital solution that allows women not only to improve their health and hygiene but to do it in a way that is healthier for the environment.
## What it does
EcoCare allows girls all over the US to find eco-friendly alternatives for their period-related supplies such as menstrual underwear, heat patches, etc. Users can search for a variety of these products, add them to their cart, and shop online at a fairly cheap rate. These products not only benefit the environment but also improve a woman's menstrual health.
## How we built it
I built it using ReactJS, Firebase, NodeJS, and the Stripe API. React and CSS were used to design and build the frontend. The backend was built using NodeJS and Firestore. Stripe was used for payment, and the application is deployed using Firebase.
## Challenges we ran into
* Deciding between Postgres and FireStore for database
* Collecting data to find these items
* Integration with Stripe API for payment processing
## Accomplishments that we're proud of
I was able to complete the frontend and also get it deployed working solo. I am also very proud of the fact that this app is built towards women health and hygiene by keeping environment healthy at the same time.
## What's next for Eco Care
This app can be turned into a full-fledged eco-friendly health and hygiene app that educates users on general facts and raises awareness in society. Women can share experiences or questions related to eco-friendly menstrual habits with others through the app platform. | ## Inspiration
This project was inspired by providing a solution to the problem of users with medical constraints. There are users who have mobility limitations which causes them to have difficulty leaving their homes. PrescriptionCare allows them to order prescriptions online and have them delivered to their homes on a monthly or a weekly basis.
## What it does
PrescriptionCare is a web app that allows users to order medical prescriptions online. Users can fill out a form and upload an image of their prescription in order to have their medication delivered to their homes. This app has a monthly subscription feature since most medications are renewed after 30 days, but it also allows users to sign up for weekly subscriptions.
## How we built it
We designed PrescriptionCare using Figma, and built it using Wix.
## Challenges we ran into
We ran into several challenges, mainly due to our inexperience with hackathons and the programs and languages we used along the way. Initially, we wanted to create the website using HTML, CSS, and JavaScript; however, we didn't end up going down that path, as it was a bit too complicated because we were all beginners. We chose to use Wix due to its ease of use and excellent template selection, which gave us a solid base to build PrescriptionCare off of. We also ran into issues with an iOS app we tried to develop to complement the website, mainly due to learning Swift and SwiftUI, which is not very beginner-friendly.
## Accomplishments that we're proud of
Managing to create a website in just a few hours and being able to work with a great team. Some of the members of this team also had to learn new software in just a few hours, which was a challenge, but this experience was a good one and we'll be much more prepared for our next hackathon.
We are proud to have experimented with two new tools thanks to this application. We were able to draft a website through Wix and create an app through xCode and SwiftUI. Another accomplishment is that our team consists of first-time hackers, so we are proud to have started the journey of hacking and cannot wait to see what is waiting for us in the future.
## What we learned
We learned how to use the Wix website builder for the first time and also how to collaborate as a team. We didn't really know each other before, happened to meet at the competition, and will probably work together at another hackathon in the future.
We learned a positive mindset is another important asset to bring into a hackathon. At first, we felt intimidated by hackathons, but we are thankful to have learned that hackathons can be fun and a priceless learning experience.
## What's next for PrescriptionCare
It would be nice to be able to create a mobile app so that users can get updates and notifications when their medication arrives. We could create a tracking system that keeps track of the medication you take and estimates when the user finishes their medication.
PrescriptionCare will continue to expand and develop its services to reach more audience. We hope to bring more medication and subscription plans for post-secondary students who live away from home, at-home caretakers, and more, and we aim to bring access to medicine to everyone. Our next goal is to continue developing our website and mobile app (both android and IOS), as well as collect data of pharmaceutical drugs and their usage. We hope to make our app a more diverse, and inclusive app, with a wide variety of medication and delivery methods. | losing |
# Are You Taking
It's the anti-scheduling app. 'Are You Taking' is the no-nonsense way to figure out if you have class with your friends by comparing your course schedules with ease. No more screenshots, only good vibes!
## Inspiration
The fall semester is approaching... too quickly. And we don't want to have to be in class by ourselves.
Every year, we do the same routine of sending screenshots to our peers of what we're taking that term. It's tedious, and every time you change courses, you have to resend a picture. It also doesn't scale well to groups of people trying to find all of the different overlaps.
So, we built a fix. Introducing "Are You Taking" (AYT), an app that allows users to upload their calendars and find event overlap.
It works very similarly to scheduling apps like when2meet, except with the goal of finding where there *is* conflict, instead of where there isn't.
## What it does
The flow goes as follows:
1. Users upload their calendar, and get a custom URL like `https://areyoutaking.tech/calendar/<uuidv4>`
2. They can then send that URL wherever it suits them most
3. Other users may then upload their own calendars
4. The link stays alive so users can go back to see who has class with who
## How we built it
We leveraged React on the front-end, along with Next, Sass, React-Big-Calendar and Bootstrap.
For the back-end, we used Python with Flask. We also used CockroachDB for storing events and handled deployment using Google Cloud Run (GCR) on GCP. We were able to create Dockerfiles for both our front-end and back-end separately and likewise deploy them each to a separate GCR instance.
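The overlap matching itself boils down to checking whether event time ranges from different people intersect. A stripped-down sketch of that logic (plain Python, leaving out the Flask routing and CockroachDB layer) might look like:

```python
from dataclasses import dataclass
from datetime import datetime
from itertools import combinations

@dataclass
class Event:
    owner: str        # who uploaded the calendar
    title: str
    start: datetime
    end: datetime

def overlaps(a: Event, b: Event) -> bool:
    # Two events overlap when each one starts before the other ends.
    return a.start < b.end and b.start < a.end

def find_shared_classes(events: list[Event]):
    """Return pairs of events from *different* people that overlap in time."""
    return [
        (a, b)
        for a, b in combinations(events, 2)
        if a.owner != b.owner and overlaps(a, b)
    ]

if __name__ == "__main__":
    demo = [
        Event("alice", "CS 101", datetime(2022, 9, 12, 10), datetime(2022, 9, 12, 11)),
        Event("bob",   "CS 101", datetime(2022, 9, 12, 10), datetime(2022, 9, 12, 11)),
        Event("bob",   "MATH 2", datetime(2022, 9, 12, 13), datetime(2022, 9, 12, 14)),
    ]
    for a, b in find_shared_classes(demo):
        print(f"{a.owner} and {b.owner} are both in {a.title}")
```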
## Challenges we ran into
There were two major challenges we faced in development.
The first was modelling relationships between the various entities involved in our application. From one-to-one, to one-to-many, to many-to-many, we had to write effective schemas to ensure we could render data efficiently.
The second was connecting our front-end code to our back-end code; we waited perhaps a bit too long to pair them together and really felt a time crunch as the deadline approached.
## Accomplishments that we're proud of
We managed to cover a lot of new ground!
* Being able to effectively render calendar events
* Being able to handle file uploads and store event data
* Deploying the application on GCP using GCR
* Capturing various relationships with database schemas and SQL
## What we learned
We used each of these technologies for the first time:
* Next
* CockroachDB
* Google Cloud Run
## What's next for Are You Taking (AYT)
There's a few major features we'd like to add!
* Support for direct Google Calendar links, Apple Calendar links, Outlook links
* Edit the calendar so you don't have to re-upload the file
* Integrations with common platforms: Messenger, Discord, email, Slack
* Simple passwords for calendars and users
* Render a 'generic week' as the calendar, instead of specific dates | ## Inspiration
Many people on our campus use an app called When2Meet to schedule meetings, but its UI is terrible, its features are limited, and overall we thought it could be done better. We brainstormed what would make When2Meet better and thought the biggest thing would be a simple new UI as well as a proper account system to see all the meetings you have.
## What it does
Let's Meet is an app that allows people to schedule meetings effortlessly. "Make an account and make scheduling a breeze." A user can create a meeting and share it with others. Then everyone with access can choose which times work best for them.
## How we built it
We used a lot of Terraform! We really wanted to go with a serverless microservice architecture on AWS and thus chose to deploy via AWS. Since we were already using Lambdas for the backend, it made sense to add Amplify for the frontend, Cognito for logging in, and DynamoDB for data storage. We wrote over 900 lines of Terraform to get our Lambdas deployed, API Gateway properly configured, permissions correct, and everything else we needed in AWS configured. Other than AWS, we utilized React with Ant Design components. Our Lambdas ran on Python 3.12.
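To give a flavour of the backend, a Python 3.12 Lambda handler behind API Gateway could look roughly like the sketch below; the table name, item shape, and request format are illustrative assumptions rather than our exact definitions.

```python
import json
import os
import uuid

import boto3

# Table name comes from an environment variable set by Terraform (assumed name).
TABLE = boto3.resource("dynamodb").Table(os.environ.get("MEETINGS_TABLE", "meetings"))

def lambda_handler(event, context):
    """Create a meeting and return its id (API Gateway proxy integration)."""
    body = json.loads(event.get("body") or "{}")
    meeting = {
        "meeting_id": str(uuid.uuid4()),
        "title": body.get("title", "Untitled meeting"),
        "owner": body.get("owner", "anonymous"),
        "slots": body.get("slots", []),   # candidate time slots picked by the owner
    }
    TABLE.put_item(Item=meeting)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"meeting_id": meeting["meeting_id"]}),
    }
```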
## Challenges we ran into
The biggest challenge we ran into was a bug with AWS. For roughly 5 hours we fought intermittent 403 responses. Initially we had an authorizer on the API Gateway, but after a short time we removed it. We confirmed it was deleted by searching for it with the CLI. We double-checked in the web console because we thought it might be the authorizer, but it wasn't there either. This ended up requiring everything around the API Gateway to be manually deleted and rebuilt. Thanks to Terraform, restoring everything was relatively easy.
Another challenge was using Terraform and AWS itself. We had almost no knowledge of it going in and coming out we know there is so much more to learn, but with these skills we feel confident to set up anything in AWS.
## Accomplishments that we're proud of
We are so proud of our deployment and cloud architecture. We think that having built a cloud project of this scale in this time frame is no small feat. Even with some challenges our determination to complete the project helped us get through. We are also proud of our UI as we continue to strengthen our design skills.
## What we learned
We learned that implementing Terraform can sometimes be difficult depending on the scope and complexity of the task. This was our first time using a component library for frontend development and we now know how to design, connect, and build an app from start to finish.
## What's next for Let's Meet
We would add more features such as syncing the meetings to a Google Calendar. More customizations and features such as location would also be added so that users can communicate where to meet through the web app itself. | ## Inspiration
We college students can all relate to having a teacher who was not engaging enough during lectures or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves to create better lecture sessions and better RateMyProfessors ratings.
## What it does
Morpheus is a machine learning system that analyzes a professor's lesson audio in order to differentiate between various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor's body language throughout the lesson using motion detection/analysis software. We then store everything in a database and show the data on a dashboard which the professor can access and utilize to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language.
## How we built it
### Visual Studio Code/Front End Development: Sovannratana Khek
Used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with components pre-built, I looked into how they worked and edited them to work for our purpose instead of working from scratch, to save time on styling to a theme. I needed to add a couple of new functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single lecture summary display. This is based on our backend database setup. There is also space available for scalability and added functionality.
### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto
I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it gives the developer the option to quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we're dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a PhpMyAdmin instance to easily manage the database in a user-friendly way.
In order to make the software easily portable across different platforms, I containerized the whole tech stack using docker and docker-compose to handle the interaction among several containers at once.
### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas
I developed a machine learning model to recognize speech emotion patterns using MATLAB's Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. I augmented the dataset in order to increase the accuracy of my results and normalized the data in order to seamlessly visualize it using a pie chart, providing an easy and seamless integration with our database that connects to our website.
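The model itself lives entirely in MATLAB, so the snippet below is only an equivalent idea expressed in Python (MFCC features feeding a small scikit-learn classifier) to illustrate the approach; it is not the pipeline described above.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def mfcc_features(path: str) -> np.ndarray:
    """Summarise a clip as the mean of its MFCC frames."""
    audio, sr = librosa.load(path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def train(paths: list[str], labels: list[str]):
    """Fit a simple emotion classifier on labelled clips (e.g. EmoDB files)."""
    X = np.stack([mfcc_features(p) for p in paths])
    model = make_pipeline(StandardScaler(), SVC(probability=True))
    model.fit(X, labels)
    return model

# Usage sketch (file names and labels are hypothetical):
# model = train(["clip_happy.wav", "clip_angry.wav"], ["happy", "angry"])
# print(model.predict([mfcc_features("new_clip.wav")]))
```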
### Solidworks/Product Design Engineering: Riki Osako
Utilizing Solidworks, I created the 3D model design of Morpheus including fixtures, sensors, and materials. Our team had to consider how this device would be tracking the teacher’s movements and hearing the volume while not disturbing the flow of class. Currently the main sensors being utilized in this product are a microphone (to detect volume for recording and data), nfc sensor (for card tapping), front camera, and tilt sensor (for vertical tilting and tracking professor). The device also has a magnetic connector on the bottom to allow itself to change from stationary position to mobility position. It’s able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.
### Figma/UI Design of the Product: Riki Osako
Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor so that all they would need to do is to scan in using their school ID, then either check his lecture data or start the lecture. Overall, the professor is able to see if the device is tracking his movements and volume throughout the lecture and see the results of their lecture at the end.
## Challenges we ran into
Riki Osako: Two issues I faced were learning how to model the product in a way that would feel simple for the user to understand through Solidworks and Figma (using it for the first time). I had to do a lot of research through Amazon videos to see how they created their Amazon Echo model, and I looked back at my UI/UX notes from the Google Coursera certification course that I'm taking.
Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes, I’m confused as to how to implement a certain feature I want to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. There were some problems that couldn’t be solved with this method as it was logic specific to our software. Fortunately, these problems just needed time and a lot of debugging with some help from peers, existing resources, and since React is javascript based, I was able to use past experiences with JS and django to help despite using an unfamiliar framework.
Giuseppe Steduto: The main issue I faced was making everything run in a smooth way and interact in the correct manner. Often I ended up in a dependency hell, and had to rethink the architecture of the whole project to not over engineer it without losing speed or consistency.
Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that resulted in different emotions when speaking. Also, the dataset was in German.
## Accomplishments that we're proud of
Achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis while learning new technologies (such as React and Docker), even though our time was short.
## What we learned
As a team coming from different backgrounds, we learned how we could utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but we were able to use his strengths in that area to create a visual model of our product and a UI design interface using Figma. Sovannratana is a freshman who had his first hackathon experience and was able to create a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other not just in the coding aspect but with different ideas as well.
## What's next for Untitled
We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days.
From a coding standpoint, we would like to improve the UI experience for the user on the website by adding more features and better style designs for the professor to interact with. In addition, add motion tracking data feedback to the professor to get a general idea of how they should be changing their gestures.
We would also like to integrate a student portal and gather data on their performance to help the teacher better understand where the students need the most help.
From a business standpoint, we would like to possibly see if we could team up with our university, Illinois Institute of Technology, and test the functionality of it in actual classrooms. | partial |
## Inspiration
Youtube is after music bots on Discord. So we are here to fill the void. Introducing Treble, the next generation of Discord music bots!
## What it does
Just like Groovy and Rhythm, Treble will play music from YouTube. Just invite the bot to your server and type -play {YouTube URL} to play music. Alternatively, if you don't want to find the URL, you can simply type -play {song title} and Treble will scrape YouTube and find the closest match using a sophisticated natural language processing algorithm with optimized query prediction.
## How we built it
Treble is built using the Python programming language. Primarily, the discord.py library was used to facilitate communication with the Discord API. Beautiful Soup aided in scraping YouTube to find URLs.
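A pared-down sketch of the `-play` command gives the idea; the token is a placeholder and the YouTube lookup is reduced to a stub, since the real bot also handles voice connections and ffmpeg playback.

```python
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True          # needed so the bot can read "-play ..." messages
bot = commands.Bot(command_prefix="-", intents=intents)

def find_youtube_url(query: str) -> str:
    """Stub for the scraping step: return the query unchanged if it is already a URL;
    otherwise this is where the Beautiful Soup search-result parsing would go."""
    if query.startswith("http"):
        return query
    return f"<first YouTube result for: {query}>"   # placeholder, not real scraping

@bot.command(name="play")
async def play(ctx: commands.Context, *, query: str):
    """Handle `-play {url or song title}`."""
    url = find_youtube_url(query)
    # The full bot joins the caller's voice channel and streams audio via ffmpeg here.
    await ctx.send(f"Now queueing: {url}")

if __name__ == "__main__":
    bot.run("YOUR_DISCORD_BOT_TOKEN")   # placeholder token
```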
## Challenges we ran into
The biggest challenge we ran into was setting up ffmpeg. Essentially, ffmpeg is a complete, cross-platform solution to record, convert and stream audio and video. The issue was that setting up ffmpeg on a Windows operating system required more steps than on Mac or Linux. We identified this problem early on thanks to the detailed documentation of discord.py. This was ultimately fixed by the manipulation of system variables and shell resets.
## Accomplishments that we're proud of
We are impressed with our resolute dedication to finishing this project within the rather short time frame allocated. Additionally, we are proud of the thorough logging capabilities of the application.
## What we learned
We learned about the API’s used for developing Discord bots, as well as how to web scrape to find music links based on keywords.
## What's next for Treble
We would like to implement support for other music or video streaming sites, such as Spotify and Soundcloud. | ## Inspiration
As music lovers, we found that certain songs trigger fond memories of the past. To capture that nostalgic essence, we developed a Retro AI BoomBox. This unique tool helps users rediscover tunes that transport them back in time, all while merging the charm of the olden years with modern technology.
## What it does
Our AI BoomBox is ultimately a music guru! BoomBot is equipped with tools to help users find new music through either a) a conversation with BoomBot with help from Cohere API, or b) accessing the user's Spotify data via Spotify API to curate a nostalgic playlist for the user based on their listening history.
## How we built it
We designed the user interface using Figma, heavily considering elements such as font, colour, and images to evoke that nostalgic feeling. We built the front-end using React.js and CSS, while our backend was composed of MongoDB and Flask. The Cohere API was essential in creating BoomBot's chat feature, and the Spotify API gave us the necessary data to recommend nostalgic songs to users based on their listening history.
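As a rough illustration of how the Flask side can stitch the two APIs together (the endpoint path, request shape, and prompt are assumptions; OAuth token refresh and error handling are omitted, and the Cohere call follows the v4-style Python client):

```python
import os
import cohere
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
co = cohere.Client(os.environ["COHERE_API_KEY"])

SPOTIFY_TOP_TRACKS = "https://api.spotify.com/v1/me/top/tracks"

def top_tracks(access_token: str, limit: int = 10) -> list[str]:
    """Fetch the user's long-term top tracks from the Spotify Web API."""
    resp = requests.get(
        SPOTIFY_TOP_TRACKS,
        headers={"Authorization": f"Bearer {access_token}"},
        params={"limit": limit, "time_range": "long_term"},
        timeout=10,
    )
    resp.raise_for_status()
    return [f"{t['name']} by {t['artists'][0]['name']}" for t in resp.json()["items"]]

@app.route("/nostalgia", methods=["POST"])
def nostalgia_playlist():
    """Ask Cohere for throwback recommendations based on the user's listening history."""
    token = request.json["spotify_token"]          # assumed request shape
    history = top_tracks(token)
    reply = co.chat(
        message="Suggest five nostalgic songs similar to: " + "; ".join(history)
    )
    return jsonify({"history": history, "suggestions": reply.text})

if __name__ == "__main__":
    app.run(debug=True)
```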
## Challenges we ran into
Merge conflicts: Throughout our development process, we encountered challenges with merge conflicts, where multiple developers attempted to modify the same code simultaneously. Resolving these conflicts required careful coordination and communication among team members to ensure that all changes were integrated smoothly.
Problems with our original tech stack (Taipy): Initially, we faced issues with our chosen technology stack, Taipy. These challenges ranged from compatibility issues to limitations in functionality, as well as lack of documentation, prompting us to reevaluate our approach and make necessary adjustments to ensure the smooth progress of our project.
How to work and participate in a hackathon (this is our first in-person hackathon!): As first-time participants in an in-person hackathon, we encountered a learning curve in understanding how to effectively collaborate within the intense and time-constrained environment of a hackathon. Overcoming this challenge involved adapting quickly to the fast-paced nature of the event, prioritizing tasks, and leveraging each team member's strengths to maximize productivity and innovation.
The slowness of Azure and Cohere: Throughout the development process, we experienced delays due to the sluggish performance of Azure and Cohere, hindering our ability to efficiently execute certain tasks and slowing down our overall progress. Despite these challenges, we employed strategies such as optimizing our workflows and seeking alternative solutions to mitigate the impact of these performance issues and keep our project on track.
## Accomplishments that we're proud of
We finished our project with the goals we had in mind!: One of our proudest accomplishments is successfully completing our project while achieving the specific objectives we set out to accomplish. Despite the challenges and obstacles we faced along the way, our team remained focused and determined, ultimately delivering a product that aligns with our initial vision and goals.
Experimented and used up to 10 different technologies: Throughout the development process, we embraced a spirit of experimentation and innovation by incorporating and leveraging up to 10 different technologies. This breadth of exploration not only expanded our technical expertise but also allowed us to discover new tools and approaches that enhanced the functionality and robustness of our project.
Text to speech feature: One standout achievement of our project is the successful implementation of a text-to-speech feature. This functionality not only adds value to our product but also demonstrates our team's ability to integrate advanced capabilities into our project, enhancing its usability and accessibility for users.
Expanded our skills and built experiences for our resume: Engaging in this project provided us with invaluable opportunities to expand our skill sets and gain practical experience, enriching our resumes and bolstering our professional profiles. From honing technical proficiencies to refining collaboration and problem-solving skills, each team member has grown personally and professionally through their contributions to the project.
Developed the front-end close to its original design: We take pride in our meticulous attention to detail and dedication to delivering a front-end interface that closely resembles its original design. By prioritizing user experience and adhering to design principles, we ensured that our project not only functions seamlessly but also boasts an aesthetically pleasing and intuitive user interface.
## What we learned
Using Spotify API with Python: We gained proficiency in leveraging the Spotify API with Python, allowing us to access and manipulate Spotify's vast music database for our project. This experience deepened our understanding of API integration and expanded our capabilities in working with external data sources.
Flask: Through our project development, we familiarized ourselves with Flask, a lightweight and flexible Python web framework. Learning Flask enabled us to build web applications efficiently and effectively, providing us with a valuable toolset for future web development projects.
Setting up GitHub Actions with Azure: We acquired knowledge and skills in configuring GitHub Actions to automate our workflow processes, coupled with deploying our application on the Azure cloud platform. This integration streamlined our development pipeline and enhanced collaboration among team members, reinforcing the importance of automation in modern software development practices.
React: Our experience with React, a popular JavaScript library for building user interfaces, allowed us to create dynamic and interactive front-end components for our project. By mastering React, we gained insights into component-based architecture and learned best practices for building scalable and maintainable web applications.
Figma: We explored Figma, a collaborative interface design tool, to create and prototype the visual elements of our project. Working with Figma enhanced our design skills and facilitated seamless collaboration among team members, enabling us to iterate and refine our design concepts efficiently throughout the development process.
## What's next for BoomBot
Inclusion of users without Spotify accounts: We plan to expand the accessibility of BoomBot by accommodating users who do not have a Spotify account. This could involve integrating additional music streaming services or providing alternative methods for accessing and enjoying the features offered by BoomBot.
Responsiveness: Enhancing the responsiveness of BoomBot across various devices and screen sizes is a priority. By optimizing the user experience for desktops, tablets, and mobile devices, we aim to ensure that BoomBot remains functional and visually appealing regardless of the platform used.
Multi-language support: To cater to a diverse user base, we intend to implement multi-language support within BoomBot. This will involve translating the user interface and content into multiple languages, allowing users from different regions and linguistic backgrounds to interact with BoomBot in their preferred language.
Attention to small details like animations and transitions: We recognize the importance of refining the user experience by paying attention to small details such as animations and transitions. Adding subtle yet engaging animations and transitions throughout the interface will elevate the overall look and feel of BoomBot, contributing to a more immersive and enjoyable user experience. | ## Inspiration
```
We had multiple inspirations for creating Discotheque. Multiple members of our team have fond memories of virtual music festivals, silent discos, and other ways of enjoying music or other audio over the internet in a more immersive experience. In addition, Sumedh is a DJ with experience performing for thousands back at Georgia Tech, so this space seemed like a fun project to do.
```
## What it does
```
Currently, it allows a user to log in using Google OAuth and either stream music from their computer to create a channel or listen to ongoing streams.
```
## How we built it
```
We used React, with Tailwind CSS, React Bootstrap, and Twilio's Paste component library for the frontend, Firebase for user data and authentication, Twilio's Live API for the streaming, and Twilio's Serverless functions for hosting and backend. We also attempted to include the Spotify API in our application, but due to time constraints, we ended up not including this in the final application.
```
## Challenges we ran into
```
This was the first ever hackathon for half of our team, so there was a very rapid learning curve for most of the team, but I believe we were all able to learn new skills and utilize our abilities to the fullest in order to develop a successful MVP! We also struggled immensely with the Twilio Live API since it's newer and we had no experience with it before this hackathon, but we are proud of how we were able to overcome our struggles to deliver an audio application!
```
## What we learned
```
We learned how to use Twilio's Live API, Serverless hosting, and Paste component library, the Spotify API, and brushed up on our React and Firebase Auth abilities. We also learned how to persevere through seemingly insurmountable blockers.
```
## What's next for Discotheque
```
If we had more time, we wanted to work on some gamification (using Firebase and potentially some blockchain) and interactive/social features such as song requests and DJ scores. If we were to continue this, we would try to also replace the microphone input with computer audio input to have a cleaner audio mix. We would also try to ensure the legality of our service by enabling plagiarism/copyright checking and encouraging DJs to only stream music they have the rights to (or copyright-free music) similar to Twitch's recent approach. We would also like to enable DJs to play music directly from their Spotify accounts to ensure the good availability of good quality music.
``` | losing |
## Inspiration
We were inspired by the numerous Facebook posts, Slack messages, WeChat messages, emails, and even Google Sheets that students at Stanford create in order to coordinate Ubers/Lyfts to the airport as holiday breaks approach. This was mainly for two reasons, one being the safety of sharing a ride with other trusted Stanford students (often at late/early hours), and the other being cost reduction. We quickly realized that this idea of coordinating rides could also be used not just for ride sharing to the airport, but simply transportation to anywhere!
## What it does
Students can access our website with their .edu accounts and add "trips" that they would like to be matched with other users for. Our site will create these pairings using a matching algorithm and automatically connect students with their matches through email and a live chatroom in the site.
## How we built it
We utilized Wix Code to build the site and took advantage of many features including Wix Users, Members, Forms, Databases, etc. We also integrated SendGrid API for automatic email notifications for matches.
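Conceptually, the matching step groups trips to the same destination whose departure times fall within a shared window. Stripped of the Wix data-collection code, the idea is roughly the following sketch (the 45-minute window is an assumed parameter, not our tuned value):

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Trip:
    rider: str
    destination: str      # e.g. "SFO"
    departs: datetime

WINDOW = timedelta(minutes=45)   # assumed tolerance for sharing a ride

def match_trips(trips: list[Trip]) -> list[list[Trip]]:
    """Group trips to the same destination whose departure times are close together."""
    by_destination = defaultdict(list)
    for trip in trips:
        by_destination[trip.destination].append(trip)

    groups = []
    for same_place in by_destination.values():
        same_place.sort(key=lambda t: t.departs)
        current = [same_place[0]]
        for trip in same_place[1:]:
            if trip.departs - current[0].departs <= WINDOW:
                current.append(trip)          # close enough to ride together
            else:
                groups.append(current)
                current = [trip]
        groups.append(current)
    return [g for g in groups if len(g) > 1]  # only keep actual matches
```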
## Challenges we ran into
## Accomplishments that we're proud of
Most of us are new to Wix Code, JavaScript, and web development, and we are proud of ourselves for being able to build this project from scratch in a short amount of time.
## What we learned
## What's next for Runway | # cheqout
A project for QHacks 2018 by [Angelo Lu](https://github.com/angelolu/), [Zhijian Wang](https://github.com/EvW1998/), [Hayden Pfeiffer](https://github.com/PfeifferH/) and [Ian Wang](https://github.com/ianw3214/). Speeding up checkouts at supermarkets by fitting existing carts with tech enabling payments.
### Targeted Clients: Established supermarket chains
Big box supermarkets with established physical locations, large inventories and high traffic, such as Walmart and Loblaw-branded stores.
### Problems Identified: Slow checkouts
Checkout times can be slow because they depend on the number of available checkout lanes and are influenced by factors such as the number of items the people in front of you have and how quickly the cashier and customer can scan and pay. Stores with consistently slow checkouts are at risk of losing customers as people look for faster options such as other nearby stores or online shopping.
### Possible Solutions
1. **Additional checkout lanes**
This solution is limited by the physical layout of the store. Additional checkout lanes require additional staff.
2. **Self-checkout kiosks**
This solution is also limited by the physical layout. Lines can still develop as people must wait for others to scan and pay, which may take a long time.
3. **Handheld self-scanners**
In supermarkets, products such as produce, and bulk foods are weighed, which cannot be processed easily with this solution.
4. **Sensor-fusion based solution replacing checkouts (Ex. Amazon Go)**
In large chain supermarkets with many established locations, implementation of all the cameras, scales and custom shelving for the system requires massive renovations and is assumed to be impractical, especially considering the number of products and stores.
### Proposed Solution: “cheqout”, Smart Shopping Carts
The solution, titled “cheqout” involves fitting existing shopping carts with a tray, covering the main basket of the cart, and a touchscreen and camera, on the cart handle.
A customer would take out a cart upon entry and scan each item's barcode before placing it in the cart. The touchscreen would display the current total of the cart in real time. Once the customer is ready to pay, they can simply tap "Check Out" on the touchscreen and scan a loyalty card (virtual or physical) with an associated payment method or tap their credit or debit card directly. Alternatively, the customer can proceed to a payment kiosk or traditional checkout if they do not have an account, are paying in cash or want a physical receipt, without having to wait for each item to be scanned again.
On the way out, there would be an employee with a handheld reader displaying what has been paid in the cart to do a quick visual inspection.
This solution trusts that most users will not steal products; however, a scale integrated in the tray will continuously monitor the weight of products in the cart and the changes in weight associated with an item's addition or removal, and prompt accordingly. The scale will also be used to weigh produce.
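One way to express that weight check: each time an item is scanned, compare the tray's observed weight change against the catalogue weight within a tolerance and prompt on a mismatch. The values below are illustrative only.

```python
TOLERANCE_G = 15  # assumed allowance for packaging variance and scale noise

# Hypothetical catalogue mapping barcodes to expected weights in grams.
CATALOGUE = {
    "0123456789012": {"name": "Pasta 500g", "weight_g": 510},
    "0987654321098": {"name": "Canned beans", "weight_g": 420},
}

def check_added_item(barcode: str, weight_before_g: float, weight_after_g: float) -> str:
    """Compare the measured weight change against the scanned item's expected weight."""
    item = CATALOGUE.get(barcode)
    if item is None:
        return "Unknown barcode, please rescan."
    delta = weight_after_g - weight_before_g
    if abs(delta - item["weight_g"]) <= TOLERANCE_G:
        return f"Added {item['name']}."
    if delta <= 0:
        return "No weight change detected. Did you place the item in the cart?"
    return "Weight does not match the scanned item. Please check your cart."

if __name__ == "__main__":
    print(check_added_item("0123456789012", 1000.0, 1512.0))
```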
This solution allows for the sale of weighed and barcoded items while still decreasing checkout line congestion. It is scalable to meet the requirements of each individual store and does not require the hiring of many additional personnel.
### Challenges
* Time: This project is being built for QHacks, a 36-hour hackathon
* Technical: The scale and touch screen remain uncooperative, prompting temporary fixes for the purpose of demonstration
### Possible Future Features/Value-Added Features
* Information about traffic within the store
By implementing indoor location tracking, analysts can visualize where customers frequent and tailor/adjust product placement accordingly
* Information about product selection
The system can record a customer’s decision making process and uncertainties based on how long they spend in one spot and if they add, remove or swap items from their cart | ## Inspiration
A common skill one needs in business management is the ability to know how the customer feels about and reacts to the services provided by the business in question. Thus, computers in this day and age are an essential tool for analyzing these important sources of customer feedback. Having a machine automatically gather uncoerced customer "feedback" data can easily indicate how the last few interactions went for the customer. Making this tool accessible was our inspiration behind this project.
## What it does
This web application gathers data from Twitter, Reddit, Kayak, TripAdvisor and Influenster at the moment with room to expand into many more social review websites. The data it gathers from these websites are represented as graphs, ratios and other symbolic representations that help the user easily conclude how the company is perceived by its customers and even compare it to how customers perceive other airline companies as well.
## How we built it
We built it using languages and packages we were familiar with, along with packages we did not know existed before yHacks 2019. An extremely careful design process was laid out well before we started working on the implementation of the webApp and we believe that is the reason behind its simplicity for the user. We prioritized making the implementation as simple as possible such that any user can easily understand the observations of the data.
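As an illustration of the scoring step only (not our exact packages), a handful of scraped reviews can be reduced to the ratios shown on the dashboard with an off-the-shelf sentiment model such as NLTK's VADER:

```python
from collections import Counter

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def label(review: str) -> str:
    """Map VADER's compound score to a coarse positive/neutral/negative label."""
    score = analyzer.polarity_scores(review)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

def summarise(reviews: list[str]) -> dict[str, float]:
    """Return the share of each sentiment class, ready to plot as a ratio or pie chart."""
    counts = Counter(label(r) for r in reviews)
    total = max(len(reviews), 1)
    return {k: counts[k] / total for k in ("positive", "neutral", "negative")}

if __name__ == "__main__":
    sample = [
        "The crew was friendly and boarding was quick!",
        "My flight was delayed for three hours with no updates.",
    ]
    print(summarise(sample))
```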
## Challenges we ran into
Importing and utilizing some packages did not play well with our implementation process, so we had to make sure we covered our design checklist by working around the issues we ran into. This included building data scrapers, data representers, and other packages from scratch. This issue became increasingly prominent the more we pressed on making the web app user-friendly, as more functions and code had to be shoveled into the back-end.
## Accomplishments that we're proud of
The data scrapers and representative models for collected data are accomplishments we're most proud of as they are simple yet extremely effective when it comes to analyzing customer feedback. In particular, getting data from giant resources of customer reactions such as TripAdvisor, Reddit and Twitter make the application highly relevant and effective. This practical idea and ease of access development we implemented for the user is what we are most proud of.
## What we learned
We learned a lot more about several of the infinite number of packages available online. There is so much information out on the internet that these 2 continuous days of coding and research have not even scratched the surface in terms of all the implementable ideas out there. Our implementation is just a representation of what a final sentiment analyzer could look like. Given there are many more areas to grow upon, we learned about customer feedback analysis and entrepreneur skills along the way.
## What's next for feelBlue
Adding more sources of data such as FaceBook, Instagram and other large social media websites will help increase the pool of data to perform sentiment analysis. This implementation can even help high-level managers of JetBlue decide which area of service they can improve upon! Given enough traction and information, feelBlue could even be used as a universal sentiment analyzer for multiple subjects alongside JetBlue Airlines! The goals are endless! | partial |
## Inspiration
We wanted to use Livepeer's features to build a unique streaming experience for gaming content for both streamers and viewers.
Inspired by Twitch, we wanted to create a platform that increases exposure for small and upcoming creators and establish a more unified social ecosystem for viewers, allowing them both to connect and interact on a deeper level.
## What is does
kizuna has aspirations to implement the following features:
* Livestream and upload videos
* View videos (both on a big screen and in a small mini-player for multitasking)
* Interact with friends (on stream, in a private chat, or in public chat)
* View activities of friends
* Highlights smaller, local, and upcoming streamers
## How we built it
Our web application was built using React, utilizing React Router to navigate through webpages, and Livepeer's API to allow users to upload content and host livestreams on our platform. For background context, Livepeer describes itself as a decentralized video infrastructure network.
The UI design was made entirely in Figma and was inspired by Twitch. However, as a result of a user research survey, changes to the chat and sidebar were made in order to facilitate a healthier user experience. New design features include a "Friends" page, introducing a social aspect that allows for users of the platform, both streamers and viewers, to interact with each other and build a more meaningful connection.
## Challenges we ran into
We had barriers with the API key provided by Livepeer.studio. This put a halt to the development side of our project. However, we still managed to get our livestreams working and our videos uploading! Implementing the design portion from Figma to the application acted as a barrier as well. We hope to tweak the application in the future to be as accurate to the UX/UI as possible. Otherwise, working with Livepeer's API was a blast, and we cannot wait to continue to develop this project!
You can discover more about Livepeer's API [here](https://livepeer.org/).
## Accomplishments that we're proud of
Our group is proud of our persistence through all the challenges that confronted us throughout the hackathon. From learning a whole new programming language to staying awake no matter how tired we were, we are all proud of each other's dedication to creating a great project.
## What we learned
Although we knew of each other before the hackathon, we all agreed that having teammates that you can collaborate with is a fundamental part of developing a project.
The developers (Josh and Kennedy) learned lots about implementing APIs and working with designers for the first time. For Josh, this was his first time applying what he had practiced in small React projects. This was Kennedy's first hackathon, where she learned how to implement CSS.
The UX/UI designers (Dorothy and Brian) learned more about designing web applications as opposed to the mobile applications they are used to. Through this challenge, they were also able to learn more about Figma's design tools and functions.
## What's next for kizuna
Our team intends to continue developing this application to its full potential. Although all of us are still learning, we would like to accomplish the next steps in our application:
* Completing the full UX/UI design on the development side, utilizing a CSS framework like Tailwind
* Implementing Lens Protocol to create a unified social community in our application
* Redesign some small aspects of each page
* Implementing filters to categorize streamers, see who is streaming, and categorize genres of stream. | ## Inspiration
Covid-19 has turned every aspect of the world upside down. Unwanted things happen and situations change; lack of communication and economic crises cannot always be prevented. Thus, we developed an application that can help people get through this pandemic by providing them with **a shift-taker job platform that creates a win-win solution for both parties.**
## What it does
This application connects companies/managers that need someone to cover a shift for an absent employee for a certain period of time, without any contract. As a result, both sides are able to cover their needs and get through the pandemic. Beyond its main goal, this app can generally be used to help people **gain income anytime, anywhere, and with anyone.** With Job-Dash, they can match a job to their own time, needs, and abilities.
## How we built it
For the design, we used Figma to lay out all the screens and give smooth transitions between frames. While the UI was being designed, the developers started to code the functionality to make the application work.
The front end was made using React; we used React Bootstrap and some custom styling to build the pages according to the UI. State management was done using the Context API to keep it simple. We used Node.js on the backend for easy context switching between frontend and backend, with Express and an SQLite database for development. Authentication was done using JWT, allowing us to avoid storing session cookies.
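Our auth layer lives in Node/Express, but the stateless-JWT idea is language-agnostic. A minimal Python sketch of the same sign/verify flow is below; the secret, payload fields, and role names are placeholders chosen for illustration:

```python
# Minimal sketch of stateless JWT auth (the project itself uses Node/Express + JWT;
# this only illustrates the same idea in Python with PyJWT).
import datetime
import jwt  # pip install PyJWT

SECRET = "change-me"  # placeholder secret, for illustration only


def issue_token(user_id: str, role: str) -> str:
    """Sign a short-lived token at login instead of storing a server-side session."""
    payload = {
        "sub": user_id,
        "role": role,  # e.g. "worker" or "business" (hypothetical role names)
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=12),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")


def verify_token(token: str) -> dict:
    """Each request carries the token; decoding it replaces a session lookup."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])


token = issue_token("user-123", "worker")
print(verify_token(token)["sub"])  # -> "user-123"
```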
## Challenges we ran into
In terms of UI/UX, dealing with the ethics of user information and providing complete details for both parties was a challenge for us. On the developer side, using Bootstrap components ended up slowing us down, as our design was custom and required us to override most of the styles. It would have been better to use Tailwind, as it would have given us more flexibility while also cutting down time versus writing CSS from scratch. Due to the online nature of the hackathon, some tasks also took longer.
## Accomplishments that we're proud of
Some of us picked up new technologies while working on the project, and creating a smooth UI/UX in Figma, with every feature included, was satisfying in itself.
Here's the link to the Figma prototype - User point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=68%3A3872&scaling=min-zoom)
Here's the link to the Figma prototype - Company/Business point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=107%3A10&scaling=min-zoom)
## What we learned
We learned that we should narrow down the scope more for future hackathons so it is easier to focus on one unique feature of the app.
## What's next for Job-Dash
In terms of UI/UX, we would love to make more improvements to the layout so it better serves its purpose of helping people find additional income through Job-Dash effectively. On the developer side, we would like to continue developing the features. We spent a long time thinking about different features that would be helpful to people, but due to the short nature of the hackathon, implementation was only a small part, as we underestimated the time it would take. On the bright side, we have the design ready and exciting features to work on. | ## Inspiration
We love spending time playing role based games as well as chatting with AI, so we figured a great app idea would be to combine the two.
## What it does
Creates a fun and interactive AI powered story game where you control the story and the AI continues it for as long as you want to play. If you ever don't like where the story is going, simply double click the last point you want to travel back to and restart from there! (Just like in Groundhog Day)
## How we built it
We used Reflex as the full-stack Python framework to develop an aesthetic frontend as well as a robust backend. We implemented 2 of TogetherAI's models to add the main functionality of our web application.
## Challenges we ran into
From the beginning, we were unsure of the best tech stack to use since it was most members' first hackathon. After settling on using Reflex, there were various bugs that we were able to resolve by collaborating with the Reflex co-founder and employee on site.
## Accomplishments that we're proud of
All our members are inexperienced in UI/UX and frontend design, especially when using an unfamiliar framework. However, we were able to figure it out by reading the documentation and peer programming. We were also proud of optimizing all our background processes by using Reflex's asynchronous background tasks, which sped up our website API calls and overall created a much better user experience.
## What we learned
We learned an entirely new but very interesting tech stack, since we had never even heard of using Python as a frontend language. We also learned about the value and struggles that go into creating a user friendly web app we were happy with in such a short amount of time.
## What's next for Groundhog
More features are in planning, such as allowing multiple users to connect across the internet and roleplay on a single story as different characters. We hope to continue optimizing the speeds of our background processes in order to make the user experience seamless. | winning |
## Inspiration
During the pandemic, music has been a relief for anxiety. It helps calm you down.
## What it does
It allows the user to click on a coloured tile that will then play a sound. Mix and match these sounds
## How we built it
Using Android Studio with Dart/Flutter programming
## Challenges we ran into
coming up with good packages to use and switching between notes
## Accomplishments that we're proud of
Making the app functional to some extent
## What we learned
How to build simple apps with Dart
## What's next for Note Tiles | ## Inspiration
We all love to listen to music, and we wanted to build something that reflected what our favorite music apps lack! For example, we wanted our users to have a little background information as well as be able to easily access playlists without having to search for them each time.
## What it does
We created a website that's both informational and interactive, and users can listen to music based on their current mood.
## How I built it
We built it using JavaScript for the API integration, and HTML/CSS for the website visuals.
## Challenges I ran into
We had issues with the layout but were able to resolve it after fixing our embedded Spotify.
## Accomplishments that I'm proud of
We're proud that we were able to get a working website up in just a day, and we all learned lots of new things!
## What I learned
Some of us didn't know some of the languages that we were working with, but we were able to get the hang of it with the help of our teammates.
## What's next for Mood Music
We would love to add more similar functionality like perhaps playing music based on the weather or what's popular in certain world locations. | ## Inspiration
We had multiple inspirations for creating Discotheque. Multiple members of our team have fond memories of virtual music festivals, silent discos, and other ways of enjoying music or other audio over the internet in a more immersive experience. In addition, Sumedh is a DJ with experience performing for thousands back at Georgia Tech, so this space seemed like a fun project to do.
## What it does
Currently, it allows a user to log in using Google OAuth and either stream music from their computer to create a channel or listen to ongoing streams.
## How we built it
We used React, with Tailwind CSS, React Bootstrap, and Twilio's Paste component library for the frontend, Firebase for user data and authentication, Twilio's Live API for the streaming, and Twilio's Serverless functions for hosting and backend. We also attempted to include the Spotify API in our application, but due to time constraints, we ended up not including this in the final application.
## Challenges we ran into
This was the first ever hackathon for half of our team, so there was a very rapid learning curve for most of the team, but we believe we were all able to learn new skills and utilize our abilities to the fullest in order to develop a successful MVP! We also struggled immensely with the Twilio Live API since it's newer and we had no experience with it before this hackathon, but we are proud of how we were able to overcome our struggles to deliver an audio application!
## What we learned
We learned how to use Twilio's Live API, Serverless hosting, and Paste component library, the Spotify API, and brushed up on our React and Firebase Auth abilities. We also learned how to persevere through seemingly insurmountable blockers.
## What's next for Discotheque
If we had more time, we wanted to work on some gamification (using Firebase and potentially some blockchain) and interactive/social features such as song requests and DJ scores. If we were to continue this, we would try to also replace the microphone input with computer audio input to have a cleaner audio mix. We would also try to ensure the legality of our service by enabling plagiarism/copyright checking and encouraging DJs to only stream music they have the rights to (or copyright-free music) similar to Twitch's recent approach. We would also like to enable DJs to play music directly from their Spotify accounts to ensure the good availability of good quality music. | losing |
## Inspiration
The inspiration behind Shifa comes from the desire to simplify medication management for the elderly and for individuals who may struggle with deciphering medical labels and understanding complex prescription instructions. We recognized the need to leverage technology to improve medication adherence and enhance understanding of medication details. The name Al Shifa comes from the Arabic word for healing or remedy, and it captures our desire to push a new wave of technological advancement towards prioritizing the health and healing of individuals.
## What it does
Shifa uses Optical Character Recognition (OCR) technology to extract text from prescription labels and medical pill bottles. It then utilizes the OpenAI API to generate simple and straightforward descriptions of the medication and its uses. This means that Shifa takes the complicated jargon often found on medication labels and translates it into easily understandable language.
The primary function of Shifa is to help the elderly and anyone managing multiple medications to stay organized and informed about their treatment. It simplifies the process of medication management, ensuring that users have a clear understanding of what each medication is for and how to take it.
## How we built it
* **OCR**: We used image preprocessing and the OCR package Tesseract to scan and extract text from images of medical pill bottles and prescription labels.
* **OpenAI API**: We prompt-engineered the davinci model from the OpenAI API to parse through the extracted OCR text and generate simplified descriptions of the medication and its uses (a minimal sketch of this OCR-to-description flow follows this list).
* **React**: We used React to create an intuitive and user-friendly frontend interface. This allows users to easily interact with the application and receive the information they need.
* **Flask**: Flask was employed to link the OCR and OpenAI backend with the React frontend, ensuring seamless communication and functionality.
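A minimal sketch of the OCR-to-description flow from the first two bullets, assuming a local Tesseract install (pytesseract) and the legacy OpenAI completions interface; the prompt wording below is illustrative, not the exact prompt used in the app:

```python
# Minimal sketch of the OCR -> LLM flow described above.
# Assumes pytesseract + a local Tesseract install and an OPENAI_API_KEY;
# the prompt text is illustrative rather than the app's exact prompt.
import os

import openai
import pytesseract
from PIL import Image

openai.api_key = os.environ["OPENAI_API_KEY"]


def describe_label(image_path: str) -> str:
    # 1) OCR: pull the raw text off the prescription label photo.
    label_text = pytesseract.image_to_string(Image.open(image_path))

    # 2) LLM: ask a davinci-style completion model to simplify it.
    prompt = (
        "Explain the following prescription label in simple, plain language "
        "for an elderly patient. Say what the medication is for and how to "
        f"take it:\n\n{label_text}"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0.2,
    )
    return response.choices[0].text.strip()
```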
## Challenges we ran into
* **OCR Accuracy**: Ensuring accurate text extraction from images posed a significant challenge, as prescription labels and pill bottles can vary in appearance and quality.
* **Integration**: Integrating multiple technologies and ensuring they work together smoothly required careful planning and testing.
## Accomplishments that we're proud of
* Developing a robust prompt-engineering and parsing pipeline around the davinci model that converts complex medical information into user-friendly language.
* Creating an easy-to-use and visually appealing frontend with React.
* Building a functional and reliable application that has the potential to significantly improve medication management for the elderly.
## What we learned
* The power of NLP in making complex medical information accessible to a wider audience.
* The importance of user-friendly design and interface in healthcare applications.
* The potential impact of technology in improving the lives of seniors and individuals with complex medication regimens.
## What's next for Shifa
The future of Shifa holds exciting possibilities, and we have designed a Figma demo to present these future enhancements.
* **Add Audio**: We hope to add audio corresponding to each scan to increase the accessibility of our app.
* **Enhanced Accuracy**: We aim to improve the accuracy of text extraction from images through further refinement of the OCR technology.
* **Expanded Medication Database**: Shifa can be enhanced by incorporating a comprehensive database of medications, providing users with more detailed information about their prescriptions.
* **Personalized Reminders**: Implementing medication reminder features to help users stay on track with their treatment plans.
* **Integration with Healthcare Providers**: Exploring options to connect Shifa with healthcare providers to ensure seamless communication and medication management.
* **Mobile App**: Developing a mobile version of Shifa for even greater accessibility and convenience.
* **Medication Scheduling**: Introducing a calendar feature that allows users to schedule when medications need to be taken. This feature will provide timely reminders, ensuring that users never miss a dose and helping them adhere to their treatment plans more effectively. | ## Inspiration
The idea for IngredientAI came from my personal frustration with navigating complex and often misleading ingredient labels on beauty and personal care products. I realized that many consumers struggle to understand what these ingredients actually do and whether they are safe. The lack of accessible information often leaves people in the dark about what they are using daily. I wanted to create a tool that brings transparency to this process, empowering users to make healthier and more informed choices about the products they use.
## What it does
IngredientAI lets users snap a photo of the ingredient label on a beauty or personal care product and get back simple, plain-language descriptions of each ingredient. The app extracts the ingredient text from the image and generates a short explanation of what each ingredient is and does, helping users make healthier and more informed choices about the products they use every day.
## How I built it
The frontend is built using React Native and run using Expo. Users interact with a FlatList component that accesses a backend database powered by Convex. Text extraction from images, as well as ingredient description generation, is done through OpenAI's gpt-4o-mini large language model.
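The app itself calls the model from its TypeScript backend, but as an illustration, a gpt-4o-mini vision request for pulling ingredient text off a label photo might look roughly like this in Python (the prompt wording and helper name are ours):

```python
# Rough sketch of extracting ingredient text from a label photo with
# gpt-4o-mini's vision input. Only illustrative; the real call is made from
# the app's TypeScript code and the prompt wording here is ours.
import base64
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def extract_ingredients(image_path: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "List the ingredients printed on this label, one per line."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }
        ],
    )
    return response.choices[0].message.content
```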
## Challenges I ran into
A big challenge that I came across was figuring out how to extract text from images. Originally, I planned on setting up a server-side script that makes use of Tesseract.js's OCR capabilities. However, after some testing, I realized that I did not have enough time to fine-tune Tesseract so that it extracts text from images under a variety of lighting conditions. For IngredientAI to be used by consumers, it must be able to work under a wide variety of circumstances. To solve this issue, I decided it would be best to use OpenAI's new Vision capabilities. I did not go with this originally because I wanted to minimize the number of OpenAI API calls I made. However, under time constraints, this was the best option.
## Accomplishments that I'm proud of
I am extremely proud of how far my App Development has come. At a previous hackathon in March, I had used React Native for the very first time. At that hackathon, I was completely clueless with the technology. A lot of my code was copy/pasted from ChatGPT and I did not have a proper understanding of how it worked. Now, this weekend, I was able to create a fully functional mobile application that has organized (enough) code that allows me to expand on this project in the future.
## What I learned
Every hackathon, my goal is to learn at least one new technology. This weekend, I decided to use Convex for the very first time. I really appreciated the amount of resources that Convex provides for learning their technology. It was especially convenient that they had a dedicated page for hackathon projects. It made setting up my database extremely fast and convenient, and as we know, speed is key in a hackathon.
## What's next for IngredientAI
My aim is to eventually bring IngredientAI to app stores. This is an app I would personally find use for, and I would like to share that with others. Future improvements and features include:
* Categorization and visualization of ingredient data
* Suggested products section
* One-button multi-store checkout
I hope you all get the chance to try out IngredientAI in the near future! | ## Inspiration
The opioid crisis is a widespread danger, affecting millions of Americans every year. In 2016 alone, 2.1 million people had an opioid use disorder, resulting in over 40,000 deaths. After researching what had been done to tackle this problem, we came upon many pill dispensers currently on the market. However, we failed to see how they addressed the core of the problem - most were simply reminder systems with no way to regulate the quantity of medication being taken, ineffective to prevent drug overdose. As for the secure solutions, they cost somewhere between $200 to $600, well out of most people’s price ranges. Thus, we set out to prototype our own secure, simple, affordable, and end-to-end pipeline to address this problem, developing a robust medication reminder and dispensing system that not only makes it easy to follow the doctor’s orders, but also difficult to disobey.
## What it does
This product has three components: the web app, the mobile app, and the physical device. The web end is built for doctors to register patients, easily schedule dates and timing for their medications, and specify the medication name and dosage. Any changes the doctor makes are automatically synced with the patient’s mobile app. Through the app, patients can view their prescriptions and contact their doctor with the touch of one button, and they are instantly notified when they are due for prescriptions. Once they click on on an unlocked medication, the app communicates with LocPill to dispense the precise dosage. LocPill uses a system of gears and motors to do so, and it remains locked to prevent the patient from attempting to open the box to gain access to more medication than in the dosage; however, doctors and pharmacists will be able to open the box.
## How we built it
The LocPill prototype was designed on Rhino and 3-D printed. Each of the gears in the system was laser cut, and the gears were connected to a servo that was controlled by an Adafruit Bluefruit BLE Arduino programmed in C.
The web end was coded in HTML, CSS, Javascript, and PHP. The iOS app was coded in Swift using Xcode with mainly the UIKit framework with the help of the LBTA cocoa pod. Both front ends were supported using a Firebase backend database and email:password authentication.
## Challenges we ran into
Nothing is gained without a challenge; many of the skills this project required were things we had little to no experience with. From the modeling in the RP lab to the back end communication between our website and app, everything was a new challenge with a lesson to be gained from. During the final hours of the last day, while assembling our final product, we mistakenly positioned a gear in the incorrect area. Unfortunately, by the time we realized this, the super glue holding the gear in place had dried. Hence began our 4am trip to Fresh Grocer Sunday morning to acquire acetone, an active ingredient in nail polish remover. Although we returned drenched and shivering after running back in shorts and flip-flops during a storm, the satisfaction we felt upon seeing our final project correctly assembled was unmatched.
## Accomplishments that we're proud of
Our team is most proud of successfully creating and prototyping an object with the potential for positive social impact. Within a very short time, we accomplished much of our ambitious goal: to build a project that spanned 4 platforms over the course of two days: two front ends (mobile and web), a backend, and a physical mechanism. In terms of just codebase, the iOS app has over 2600 lines of code, and in total, we assembled around 5k lines of code. We completed and printed a prototype of our design and tested it with actual motors, confirming that our design’s specs were accurate as per the initial model.
## What we learned
Working on LocPill at PennApps gave us a unique chance to learn by doing. Laser cutting, Solidworks Design, 3D printing, setting up Arduino/iOS bluetooth connections, Arduino coding, database matching between front ends: these are just the tip of the iceberg in terms of the skills we picked up during the last 36 hours by diving into challenges rather than relying on a textbook or being formally taught concepts. While the skills we picked up were extremely valuable, our ultimate takeaway from this project is the confidence that we could pave the path in front of us even if we couldn’t always see the light ahead.
## What's next for LocPill
While we built a successful prototype during PennApps, we hope to formalize our design further before taking the idea to the Rothberg Catalyzer in October, where we plan to launch this product. During the first half of 2019, we plan to submit this product at more entrepreneurship competitions and reach out to healthcare organizations. During the second half of 2019, we plan to raise VC funding and acquire our first deals with healthcare providers. In short, this idea only begins at PennApps; it has a long future ahead of it. | losing |
## Inspiration
As of today, over a third of the US population is over the age of 50 and this number is rapidly rising. One of the biggest issues facing today's elderly is an inability to understand and effectively utilize modern technology such as the Internet. As they did not grow up with these tools, many seniors find technology to be intimidating and become averse to its usage entirely. As a result, the elderly population suffers from an inhibited access to information and services, limiting their autonomy. This can be particularly impactful when seniors are unable to access important health-related information. For example, a senior may be experiencing chest pain and want to learn more about their condition. Simply searching "chest pain" on Google returns over three billion search results, many of which are long articles about general chest pain and the possible causes, symptoms, and treatment options. While this may be helpful and digestible for someone who has grown up with the Internet, for the elderly, such a wide scope of information may be daunting and difficult to parse through. There are so many different possible causes for chest pain and each has its own set of symptoms and conditions. It would be much more effective for them to include in their search query more specifics about their situation (ie, 'sharp upper chest pain lasting for three weeks'). We hope to help them leverage the Internet in the ways that they would like, but don't know how to.
## What it does
Rondo is an AI-powered search-engine tool that makes the Internet truly accessible to the elderly population. Rondo takes in a query from the user like any other search engine. However, based on the specificity of the query, Rondo prompts the user to answer a specific, well-defined multiple-choice question related to the original query in order to generate a more specific, better-defined search query with more tailored results. Alongside the question prompting, Rondo also automatically summarizes the top ten search results for the current query to make the content of each link more digestible to the user, who can read the brief summaries and decide which link to click on. These two functionalities run in parallel, with question prompting on the left half of the screen and the article summaries on the right-hand side. This way, the user can see what the current search query returns and can click on an article they find appropriate at any time. If the search results are still too broad or not what they are looking for, they can continue answering the prompted questions to generate a better search query. Once the query reaches a certain level of specificity, Rondo stops asking follow-up questions.
## How we built it
For the frontend, we used Reflex -- an open-source framework for building web applications in pure Python.
For the backend, we leveraged Python along with the OpenAI API, constructing prompts to elicit relevant responses, and dynamically updating queries based on iterative user input.
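As a rough sketch of the query-refinement step, the backend can ask the model for one clarifying multiple-choice question per round; the model choice and prompt wording below are illustrative, and the real prompts are more involved:

```python
# Rough sketch of turning a vague query into one clarifying multiple-choice
# question. Model name and prompt wording are assumptions for illustration.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def follow_up_question(query: str) -> str:
    prompt = (
        f"A user searched for: '{query}'. Ask ONE short multiple-choice "
        "question (with 3-4 simple options) whose answer would make this "
        "search more specific. Use plain language suitable for an elderly reader."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,
    )
    return response.choices[0].message.content


# Example: follow_up_question("chest pain") might ask where the pain is or how
# long it has lasted; the chosen option is appended to the query before the
# next round of search and summarization.
```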
## Challenges we ran into
* Fine tuning GPT-4 for optimal query updating and follow-up question generation required persistent experimentation and tactical prompt engineering.
* Balancing the efficiency needs of a search engine with the processing time of our openai summarization tool.
* Integrating free-form GPT-4 responses into a highly structured frontend.
## Accomplishments that we're proud of
We are immensely proud of how much we were able to accomplish in such a short timeframe, especially given the context of our lack of front-end development experience. We were able to quickly pick up and leverage Reflex in order to create a fully functional product. We also put a lot of time and careful thought into our ideation process and we are very proud of the human-centered design we were able to create.
## What we learned
Through our fully operational implementation of Rondo, we gained a rich and thorough understanding of sophisticated language processing tools such as ChatGPT and more specifically how to adapt and fine-tune such powerful models for our specific applications. We also gained many insights into user experience design through our design process which aimed to create a user-friendly interface with the target audience of the elderly population in mind.
## What's next for Rondo - A Search Engine for the Elderly Made Easy
* Faster information retrieval and summary recall using more efficient LLMs and GPT prompts
* "Query Quality" bar to measure the specificity of the prompt and incentivize continued prompt specification
* More accessible summarization tools depending on the user's reading level
## Try it out:
* Clone and cd into the Git repository
* From the root directory, run `pip install -r requirements.txt`
* Replace the `secret_key` in `summarizeWebPage.py` and `openai.key` in `follow_up_question_generation.py` with your openai key | ## Inspiration
The inspiration for Polaris came from witnessing the struggles that the elderly face in navigating the rapidly evolving digital landscape. In an age where the internet is integral to everyday life, it became clear that a significant portion of the population is left behind due to interfaces that are not designed with their needs in mind. Our goal was to create a solution that not only bridges this gap but also empowers the elderly to navigate the web with confidence and independence, ensuring that age is not a barrier to digital literacy and access.
## What It Does
Polaris is a Chrome extension that revolutionizes web accessibility for the elderly. By allowing users to describe in natural language what they wish to accomplish on a website, Polaris guides them through the necessary steps. It captures and analyzes web page components, interprets the user’s intent through advanced language models, and provides visual cues and simple instructions via an overlay. This process simplifies web navigation, making digital spaces more inclusive and user-friendly.
## How We Built It
We built Polaris using a combination of frontend frameworks, APIs, and machine learning technologies. The front end, developed as a Chrome extension and an accompanying web app, captures user input and webpage elements processed using Beautiful Soup. The backend, powered by multi-modal language models, processes this data to understand the context and intent behind user commands. We leveraged open-source models like Llama 2 and Mixtral 8x7B through together.ai's inference APIs and maximized their potential with prompt engineering. Our user-focused interface, built with a combination of React, Material UI, and Vite, lets us simplify navigating the web for people of all ages.
## Challenges We Ran Into
One of the main challenges was ensuring accurate segmentation and interpretation of web page components in real time, which required optimizing our models for speed without sacrificing accuracy. Another challenge was designing an intuitive user interface that could be easily navigated by elderly users, necessitating several iterations based on user feedback.
## Accomplishments That We're Proud Of
We are particularly proud of developing a solution that significantly improves web accessibility for the elderly, a demographic often overlooked in technology design. Successfully integrating complex AI technologies into a user-friendly application that can run efficiently as a Chrome extension stands as a testament to our team’s dedication and technical prowess.
## What We Learned
Throughout this project, we gained deeper insights into the challenges of web accessibility and the potential of AI to address these issues. We learned the importance of user-centered design, especially when creating technology for populations with specific needs. Additionally, we honed our skills in AI development, particularly in prompt engineering and data processing, and learned valuable lessons in teamwork, project management, and iterative design.
## What's Next for Polaris
Looking ahead, we aim to expand Polaris's capabilities to cover more complex web interactions and support additional browsers beyond Chrome. Our vision is a digital world whose benefits are universally accessible to everyone, irrespective of age. We see what is now just a simple but powerful chrome extension one day turning into a suite of tools aimed at improving equitable access to the Internet, providing seamless integrations with other technologies like browsers and OSes." | ## Inspiration
We aimed to build smart cities by helping garbage truck drivers receive the most efficient route in near-real-time, minimizing waste buildup in prone areas. This approach directly addresses issues of inefficiency in traditional waste collection, which often relies on static daily or weekly routes.
You might be wondering, why not just let drivers follow standard routes? In densely populated areas, waste accumulates faster than in less populated zones. This means that a one-size-fits-all route approach leads to inefficient pickups, resulting in garbage buildup that contributes to air and water pollution. By targeting areas where waste accumulates more quickly, we can reduce contamination, improve air quality, and create greener, healthier environments. Additionally, optimizing waste collection can lead to more sustainable use of resources and reduce the strain on landfills.
## What it does
CleanSweep has sensors that collect real-time information about all the trash cans in the city, detecting different waste levels. This data is live-updated in the truck drivers' main portal while they drive, allowing them to receive the most efficient collection routes. These routes are optimized based on real-time data, including the percentage of trash bin capacity filled, traffic conditions between bins, and the number of days since the last pickup. As drivers collect trash, they can update their progress live to receive the next optimized route for the remaining bins.
## How we built it
For our hardware: We used two phone cameras in two similar scenarios that sent photos repeatedly to a computer. Each image is then passed to the Raspberry Pi, where Python OpenCV is used to detect how full the trash bin is. A Node.js script then displays the level (1 to 5) on a set of LEDs and passes the information to a local server for further processing on the software side.
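A rough sketch of the kind of contour-based fill check described above; the crop region and thresholding choices are illustrative, and in practice this step needed careful tuning for lighting:

```python
# Rough sketch of estimating how full a bin is from a photo with OpenCV.
# The crop region and threshold are illustrative; in practice this needed
# careful tuning for lighting, which was one of our main challenges.
import cv2
import numpy as np


def fill_level(image_path: str, bin_roi=(100, 50, 300, 400)) -> int:
    x, y, w, h = bin_roi  # fixed region of the frame where the bin sits
    frame = cv2.imread(image_path)
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

    # Threshold and find contours of the trash region inside the bin.
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 1

    # Use the largest contour's area relative to the bin area as a fill ratio,
    # then bucket it into the 1-5 levels shown on the LEDs.
    filled = max(cv2.contourArea(c) for c in contours) / float(w * h)
    return int(np.clip(np.ceil(filled * 5), 1, 5))
```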
Backend: We combined the reported trash-level data from the hardware, traffic and distance-to-bin information from the Google Maps API and Google Distance Matrix API, and pre-modeled values for the time since the last garbage pickup, and fed these features into a Random Forest Classifier model on Databricks. The model predicts how to prioritize the bins so that routes stay short, resulting in fewer emissions. An adjacency matrix is then used to retrieve the highest-priority paths based on traffic and waste levels.
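A simplified sketch of the prioritization model; the feature names and toy training rows are placeholders, since the real model is trained on Databricks with live data:

```python
# Simplified sketch of the bin-prioritization model. Feature names and the toy
# training data are placeholders; the real model uses live fill levels,
# Distance Matrix travel times, and pickup history on Databricks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per bin: [fill_percent, travel_minutes_from_truck, days_since_pickup]
X_train = np.array([
    [90, 5, 4],
    [20, 3, 1],
    [60, 12, 3],
    [35, 2, 2],
])
# Label: 1 = high priority (visit soon), 0 = low priority
y_train = np.array([1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score the remaining bins and visit the most urgent ones first; these
# priorities feed the adjacency-matrix route lookup mentioned above.
bins = np.array([[75, 8, 3], [15, 4, 1]])
priority = model.predict_proba(bins)[:, 1]
print(np.argsort(-priority))  # bin indices in descending priority
```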
Frontend: We used React for the UI and TailwindCSS for styling to create the portal, deployed over Terraform. We brought all of this information together to display the recommended optimal routes based on how full the bins were (which can be changed by adding more trash to bin 1 and 2 in real life).
## Challenges we ran into
Measuring the fill level of the trash bin was very challenging for what seemed like a simple task. We first tried looking for pre-trained models that could do it for us, but the models we found only worked with object tracking. We then turned to OpenCV, but that only worked in certain conditions. Thus, one of the hardest challenges was making sure the lighting conditions, the camera and hardware setup, and our OpenCV contour algorithm worked deterministically, as even a shadow could throw off our results.
## Accomplishments that we're proud of
None of us had ever worked with OpenCV or used hardware at a hackathon. We wanted to take some of the skills we learned in our electrical classes and apply them to a cool solution, even if it was a bit tacky. We were super proud of learning how OpenCV works and how we could combine hardware and a network to create a complete interactive solution. Hardware was a major component of this hackathon for us, as we had always wanted to use it to make our project easier to visualize and understand for everyone. We enjoyed stepping out of our comfort zones and using new technologies; each member applied a new skill to the project.
## What we learned
All of us learned about the networking and hardware pieces required to communicate, including the security standards and communication frequency needed to keep all the data real-time. We learned how to build a working website with a login system using React and Vite, as well as how to use the Google Maps API to visualize the route for the drivers. We also learned how GPIO works on a Raspberry Pi and how a simple JS script can control the power output of those pins to drive LEDs. Another skill we found interesting was using OpenCV for image processing, which made computing the fill percentage a matter of seconds rather than the much longer time of manual image-processing algorithms. Finally, we started using an ML model to help us find the best routes for drivers, which resulted in faster travel times since traffic was avoided and garbage collection was optimized.
## What's next for Clean Sweep
Clean Sweep's next goal is to deploy real-time sensors in bins spread across the city, both to drive the live routing and to provide the large amounts of data needed to keep training the model. This would let the real-time UI efficiently cover all the necessary stops for the day. Such a system would also handle rare cases, such as a garbage truck missing a stop or other trucks taking over a broken truck's schedule, since the route would update in real time to collect all garbage quickly and efficiently. | losing |
## Inspiration
When I was a young kid in 1st and 2nd grade, I always found exams to be very boring, and the questions asked on them felt really irrelevant. When we are that young, we don't care how many objects some hypothetical character has; what is important is that we are enjoying the process of learning. So, to assist teachers in making the learning process enjoyable, I would like to present a tool that makes the questions themselves fun and more involving.
## What it does
It takes input from the user, which is a question from first/second-grade-level math. It then converts this question into a comic-like format so that kids of that age group can have fun while trying to solve it. It currently only supports one type of question.
## How we built it
Since it supports only one kind of question, the question was designed in a way that maps cleanly onto the right prompt. The question gets converted to the desired prompt through simple string concatenation. This prompt is then fed into the OpenAI image generation API to get the images needed to construct the comic. First, four separate images are made to show the different situations in the question; then these four images are stitched together to make a single comic strip.
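A condensed sketch of that pipeline is below; the prompt template and image size are illustrative, not the exact ones tuned for the supported question type:

```python
# Condensed sketch of the MathVis pipeline: build a prompt per scene, generate
# four images, and stitch them into one strip. Prompt template and image size
# are illustrative, not the exact ones used.
import io
import os

import requests
from openai import OpenAI
from PIL import Image

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def comic_for(question_scenes: list[str]) -> Image.Image:
    panels = []
    for scene in question_scenes:  # four scenes derived from the question
        prompt = f"A friendly cartoon panel for a children's math comic: {scene}"
        url = client.images.generate(
            model="dall-e-2", prompt=prompt, n=1, size="512x512"
        ).data[0].url
        panels.append(Image.open(io.BytesIO(requests.get(url).content)))

    # Stitch the four panels side by side into a single comic strip.
    strip = Image.new("RGB", (512 * len(panels), 512), "white")
    for i, panel in enumerate(panels):
        strip.paste(panel.resize((512, 512)), (512 * i, 0))
    return strip
```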
## Challenges we ran into
1. Designing the prompt for the question.
## Accomplishments that we're proud of
I was able to get the prompt for image generation right, so most of the time the generated comic is correct for the question asked.
## What we learned
How to build "software 3.0", that is, making use of prompts to code creative solutions. I also learned, at a higher level, how image processing works.
## What's next for MathVis
Supporting other types of questions, and even venturing outside the domain of math to support other subjects. | ## Inspiration
We loved picture books as children and making up our own stories. Now, it's easier than ever to write these stories into a book using AI!
## What it does
* Helps children write their own stories
* Illustrates stories for children
* Collects a child's story into a picture book, sharable to their friends and family
* Use the emotion of your voice to guide the story
## How we built it
We used
* React for the UI (display and state management)
* hume.ai to facilitate the end-to-end conversation
* DALL-E to illustrate stories
* Firebase for saving stories
## Challenges we ran into
### 1. Expiring Image URLs
The format of the OpenAI DALL-E API's response is an image URL. We encountered two challenges with this URL: latency and expiration. First, the response took up to five seconds to load the image. Second, the images expired after a set number of hours, becoming inaccessible and broken on our site.
To solve this challenge, we downloaded the image and re-uploaded it to Firebase storage, replacing the stored image URL with the Firebase URL. This was not possible on our existing frontend due to CORS, so we wrote a node backend to perform this processing.
### 2. Sensitive Diffusion Model Prompts
Initially, we directly used the generated story text of each page as the prompt to DALL-E, the diffusion model we used for illustrations. The generated images were extremely low quality and oftentimes did not match the prompt at all.
We suspect the reason is that diffusion models are trained quite differently from transformers. While transformers are pre-trained on the next-token prediction task with very long text sequences, diffusion models are trained to accept shorter prompts with more modifying attributes.
Therefore, we added a preprocessing step that extracts a five-word summary of the prompt and a list of five attributes. This step dramatically improved the quality of output illustrations.
## Accomplishments that we're proud of
* An end-to-end loop to create a story book by speaking the story aloud
* An aesthetic interface for viewing finished story books
## What we learned
* AI prompts are very sensitive (esp. prompts for diffusion models). Even the difference of a single word can drastically change the output!
## What's next for StoryBook AI
To focus on the core functionality of the app, we omitted several things that we would want to build after the hackathon:
1. User accounts. Users should be able to create accounts, possibly with linked parental accounts also for parents to view stories their children made.
2. Difficulty settings. Our goal is to improve the creativity and storytelling abilities of children. Younger children may need more assistance with difficulty while older children may focus on more complex literary elements such as plot and character development. We would like to tailor the questions the AI raises to each child's ability.
3. Customization. Users should be able to customize the look and feel of their own stories including themes and special styles. | ## Inspiration 💡
Our inspiration for this project was to leverage new AI technologies such as text to image, text generation and natural language processing to enhance the education space. We wanted to harness the power of machine learning to inspire creativity and improve the way students learn and interact with educational content. We believe that these cutting-edge technologies have the potential to revolutionize education and make learning more engaging, interactive, and personalized.
## What it does 🎮
Our project is a text and image generation tool that uses machine learning to create stories from prompts given by the user. The user can input a prompt, and the tool will generate a story with corresponding text and images. The user can also specify certain attributes such as characters, settings, and emotions to influence the story's outcome. Additionally, the tool allows users to export the generated story as a downloadable book in the PDF format. The goal of this project is to make story-telling interactive and fun for users.
## How we built it 🔨
We built our project using a combination of front-end and back-end technologies. For the front-end, we used React, which allows us to create interactive user interfaces. On the back-end side, we chose Go as our main programming language and used the Gin framework to handle concurrency and scalability. To handle communication between the resource-intensive back-end tasks, we used a combination of RabbitMQ as the message broker and Celery as the work queue. These technologies allowed us to efficiently handle the flow of data and messages between the different components of our project.
To generate the text and images for the stories, we leveraged the power of OpenAI's DALL-E 2 and GPT-3 models. These models are state-of-the-art in their respective fields and allow us to generate high-quality text and images for our stories. To improve the performance of our system, we used MongoDB to cache images and prompts, which allows us to quickly retrieve data without having to re-process it every time it is requested. To minimize the load on the server, we used socket.io for real-time communication; it allows us to keep the connection open, and once the work queue is done processing data, it sends a notification to the React client.
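A stripped-down sketch of how a Celery worker behind RabbitMQ can pick up an illustration job; the broker URL, task name, and notification step are placeholders, and in the real system the Gin backend enqueues the work:

```python
# Stripped-down sketch of an illustration job going through the work queue.
# Broker URL, task name, and the notification step are placeholders.
import os

from celery import Celery
from openai import OpenAI

app = Celery("dream", broker=os.getenv("RABBITMQ_URL", "amqp://localhost"))
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


@app.task
def illustrate_page(story_id: str, page_text: str) -> str:
    """Generate one DALL-E illustration for a story page, off the request path."""
    url = client.images.generate(
        model="dall-e-2",
        prompt=f"Children's storybook illustration: {page_text}",
        n=1,
        size="512x512",
    ).data[0].url
    # A real worker would cache the prompt/URL pair in MongoDB and then emit a
    # socket.io event so the React client can render the finished page.
    return url
```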
## Challenges we ran into 🚩
One of the challenges we ran into during the development of this project was converting the generated text and images into a PDF format within the React front-end. There were several libraries available for this task, but many of them did not work well with the specific version of React we were using. Additionally, some of the libraries required additional configuration and setup, which added complexity to the project. We had to spend a significant amount of time researching and testing different solutions before we were able to find a library that worked well with our project and was easy to integrate into our codebase. This challenge highlighted the importance of thorough testing and research when working with new technologies and libraries.
## Accomplishments that we're proud of ⭐
One of the accomplishments we are most proud of in this project is our ability to leverage the latest technologies, particularly machine learning, to enhance the user experience. By incorporating natural language processing and image generation, we were able to create a tool that can generate high-quality stories with corresponding text and images. This not only makes the process of story-telling more interactive and fun, but also allows users to create unique and personalized stories.
## What we learned 📚
Throughout the development of this project, we learned a lot about building highly scalable data pipelines and infrastructure. We discovered the importance of choosing the right technology stack and tools to handle large amounts of data and ensure efficient communication between different components of the system. We also learned the importance of thorough testing and research when working with new technologies and libraries.
We also learned about the importance of using message brokers and work queues to handle data flow and communication between different components of the system, which allowed us to create a more robust and scalable infrastructure. We also learned about the use of NoSQL databases, such as MongoDB to cache data and improve performance. Additionally, we learned about the importance of using socket.io for real-time communication, which can minimize the load on the server.
Overall, we learned about the importance of using the right tools and technologies to build a highly scalable and efficient data pipeline and infrastructure, which is a critical component of any large-scale project.
## What's next for Dream.ai 🚀
There are several exciting features and improvements that we plan to implement in the future for Dream.ai. One of the main focuses will be on allowing users to export their generated stories to YouTube. This will allow users to easily share their stories with a wider audience and potentially reach a larger audience.
Another feature we plan to implement is user history. This will allow users to save and revisit old prompts and stories they have created, making it easier for them to pick up where they left off. We also plan to allow users to share their prompts on the site with other users, which will allow them to collaborate and create stories together.
Finally, we are planning to improve the overall user experience by incorporating more customization options, such as the ability to select different themes, characters and settings. We believe these features will further enhance the interactive and fun nature of the tool, making it even more engaging for users. | losing |

## Inspiration
📚In our current knowledge economy, our information is also our most important valuable commodity.
💡 Knowledge is available in almost infinite abundance 📈, delivered directly through our digital devices 📱 💻 , and the world is more connected and educated than ever across the globe 🌎 🌍 🌏. However, the surge of information brings adverse effects💥 🔥 🌈! With information circulating as rapidly as ever, information and cognitive overload 🧠👎🏼 has become a constant symptom in our lives.
✨💡✨Mr. Goose 🦢 is here to help by aggregating millions of sources to simplify complex concepts into comprehensible language for even a five-year-old. ✨💡✨
## What it does
It is a chrome extension for users to conveniently type in questions, 💡 highlight 💡 paragraphs, sentences, or words on their browser, and receive a ⭐️simple to understand answer or explanation. 🎇 🎆
## How we built it
✨🔨Our Chrome extension was built using JavaScript, HTML, and CSS, communicating over a REST API. ✨As for the backend, functions are deployed on Google Cloud Functions ☁️☁️☁️and call the Google Cloud Language API☁️☁️☁️, which uses Natural Language Processing 💬 💡 to figure out what entities are in the highlighted text. Once we've figured out what the text is about, we use it to search the web using APIs such as the Reddit API, the Stack Overflow/Stack Exchange API, and the Wikipedia API. ⭐️⭐️⭐️
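✨ For illustration only: our Cloud Function is written in JavaScript, but the equivalent entity-extraction and lookup step could look roughly like this in Python (the reply text and helper name are ours):

```python
# Rough sketch of the backend flow: pull entities from the highlighted text
# with the Cloud Natural Language API, then look the top one up on Wikipedia.
# The real Cloud Function is JavaScript; this is only illustrative.
import requests
from google.cloud import language_v1


def explain(highlighted_text: str) -> str:
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(
        content=highlighted_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    entities = client.analyze_entities(request={"document": doc}).entities
    if not entities:
        return "Mr. Goose couldn't find anything to explain here!"

    # Take the most salient entity and fetch a plain-text intro from Wikipedia.
    topic = max(entities, key=lambda e: e.salience).name
    pages = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query", "prop": "extracts", "exintro": True,
            "explaintext": True, "titles": topic, "format": "json",
        },
        timeout=10,
    ).json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")
```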
## Challenges we ran into
One of the 💪 main challenges 💪 we ran into was while building 🌼👩🏼💻 🌻 the wireframes of the extension, discussing 💭💭 and re-evaluating the logic of the app’s uses. ✨ As we were 🔨 designing 🧩 several features, we tried to discuss what features would be the most user-friendly and important while also maintaining the efficiency 📈 📈 📈 and importance of learning/knowledge 📚of our Chrome extension. ✨
## Accomplishments that we're proud of
✨✨✨We were extremely proud ⭐️ of the overall layout and elements 🧩🧩🧩 we implemented into our app design, such as the animated goose 🦢 that one of our team members drew and animated from scratch. From the color 🔴 🟠 🟡 choices to the attention to details like which words 💬 📃 📄 should be important in the NLP API to the resulted information 📊, we had to take a lot into consideration for our project, and it truly was a fun learning experience. 👍👍👍
## What we learned
🌟 How to create a Chrome Extension
🌟 How to use Google Firebase
🌟 How to use Google Cloud's NLP API, Stack Exchange API, Reddit API, Wikipedia API
🌟 How to integrate all of these together
🌟 How to create animated images for implementation on the extensions
## What's next for Mr. Goose
✨Adapting our extension for compatibility with other browsers.
✨Adding a voice recognition feature to allow users to ask questions and receive simplified answers in return
✨Adding ability to access images while on the extension | ## Inspiration
The health industry challenge got us thinking about prescriptions, how unreliable our current system is and how we lack the ability to track a patient's history. Tracking medical history is often crucial in recognizing the signs and symptoms of a medical condition.
## What it does
Provides a user access to their current prescriptions and prescription history.
Allows user to authenticate their doctors, nurses and pharmacists using a QR code. This gives the patient autonomy over their own data.
## How we built it
Mobile application (Flutter) : Allows patient to view own history, print out prescriptions, etc.
Web application (HTML/CSS/JS) : Doctor or pharmacist will generate a QR code. Patient scans the QR code, authenticating the professional, and the professional will be automatically redirected to user's profile (after first use, the professional will remain authenticated until user removes authorization). Doctor can then fill out prescription, which can be accessed by patient and pharmacist.
Database (Firebase): Database to contain user accounts - three account types: Medical professional (can write prescriptions), Pharmaceutical professional (can fill prescriptions) and Patient (administrator, accesses their own history and who has access to it)
## Challenges we ran into
Choosing the right database structure.
Communicating virtually is also a barrier.
## Accomplishments that we're proud of
Potentially an integrated system that would change how medical data is approached.
## What we learned
It is worth taking the time to think of an idea worth doing. This was not the idea we originally were going to go with.
## What's next for Pocket Prescription
We implemented basic requirements. We plan to continue the project, the next step being researching security methods. | ## Inspiration
Has your browser ever looked like this?

... or this?

Ours have, *all* the time.
Regardless of who you are, you'll often find yourself working in a browser on not just one task but a variety of tasks. Whether its classes, projects, financials, research, personal hobbies -- there are many different, yet predictable, ways in which we open an endless amount of tabs for fear of forgetting a chunk of information that may someday be relevant.
Origin aims to revolutionize your personal browsing experience -- one workspace at a time.
## What it does
In a nutshell, Origin uses state-of-the-art **natural language processing** to identify personalized, smart **workspaces**. Each workspace is centered around a topic composed of related tabs from your browsing history. For each workspace, Origin provides your most recently visited tabs (and related future ones), a generated **textual summary** of those websites drawn from all their text, and a **fine-tuned ChatBot** trained on data about that topic, ready to answer specific user questions with citations while maintaining the history of a conversation. The ChatBot not only answers general factual questions (given it's a foundation model), but also answers/recalls specific facts found in the URLs/files that the user visits (e.g. linking to a course syllabus).
Origin also provides a **semantic search** on resources, as well as monitors what URLs other people in an organization visit and recommend pertinent ones to the user via a **recommendation system**.
For example, a college student taking a History class and performing ML research on the side would have sets of tabs that would be related to both topics individually. Through its clustering algorithms, Origin would identify the workspaces of "European History" and "Computer Vision", with a dynamic view of pertinent URLs and widgets like semantic search and a chatbot. Upon continuing to browse in either workspace, the workspace itself is dynamically updated to reflect the most recently visited sites and data.
**Target Audience**: Students to significantly improve the education experience and industry workers to improve productivity.
## How we built it

**Languages**: Python ∙ JavaScript ∙ HTML ∙ CSS
**Frameworks and Tools**: Firebase ∙ React.js ∙ Flask ∙ LangChain ∙ OpenAI ∙ HuggingFace
There are a couple of different key engineering modules that this project can be broken down into.
### 1(a). Ingesting Browser Information and Computing Embeddings
We begin by developing a Chrome Extension that automatically scrapes browsing data in a periodic manner (every 3 days) using the Chrome Developer API. From the information we glean, we extract titles of webpages. Then, the webpage titles are passed into a pre-trained Large Language Model (LLM) from Huggingface, from which latent embeddings are generated and persisted through a Firebase database.
### 1(b). Topical Clustering Algorithms and Automatic Cluster Name Inference
Given the URL embeddings, we run K-Means Clustering to identify key topical/activity-related clusters in browsing data and the associated URLs.
We automatically find a description for each cluster by prompt engineering an OpenAI LLM, specifically by providing it the titles of all webpages in the cluster and requesting it to output a simple title describing that cluster (e.g. "Algorithms Course" or "Machine Learning Research").
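A condensed sketch of steps 1(a)-1(b); the embedding model, number of clusters, and naming prompt below are illustrative choices, not necessarily the ones used in production:

```python
# Condensed sketch of 1(a)-1(b): embed tab titles, cluster them, and ask an
# LLM to name each cluster. Embedding model, k, and the naming prompt are
# illustrative choices.
import os

from openai import OpenAI
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
encoder = SentenceTransformer("all-MiniLM-L6-v2")


def build_workspaces(titles: list[str], k: int = 5) -> dict[str, list[str]]:
    embeddings = encoder.encode(titles)
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(embeddings)

    workspaces = {}
    for cluster in range(k):
        members = [t for t, label in zip(titles, labels) if label == cluster]
        prompt = (
            "Give a short 2-4 word title describing what these browser tabs "
            "have in common:\n" + "\n".join(members)
        )
        name = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content.strip()
        workspaces[name] = members
    return workspaces
```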
### 2. Web/Knowledge Scraping
After pulling the user's URLs from the database, we asynchronously scrape through the text on each webpage via Beautiful Soup. This text provides richer context for each page beyond the title and is temporarily cached for use in later algorithms.
### 3. Text Summarization
We split the incoming text of all the web pages using a CharacterTextSplitter to create smaller documents, and then attempt a summarization in a map reduce fashion over these smaller documents using a LangChain summarization chain that increases the ability to maintain broader context while parallelizing workload.
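Roughly, the chain looks like this in classic LangChain; the chunk size and model choice are illustrative:

```python
# Rough sketch of the map-reduce summarization step in classic LangChain.
# Chunk size and model choice are illustrative.
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter


def summarize_workspace(page_texts: list[str]) -> str:
    splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=100)
    docs = [Document(page_content=chunk)
            for text in page_texts
            for chunk in splitter.split_text(text)]

    # map_reduce summarizes chunks in parallel, then combines the partial
    # summaries, which preserves broader context across many pages.
    chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
    return chain.run(docs)
```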
### 4. Fine Tuning a GPT-3 Based ChatBot
The infrastructure for this was built on a recently-made popular open-source Python package called **LangChain** (see <https://github.com/hwchase17/langchain>), a package with the intention of making it easier to build more powerful Language Models by connecting them to external knowledge sources.
We first deal with data ingestion and chunking, before embedding the vectors using OpenAI Embeddings and storing them in a vector store.
To provide the best chatbot possible, we keep track of the user's conversation history and inject it into the chatbot during each interaction, while simultaneously looking up relevant information that can be quickly queried from the vector store. The generated prompt is then passed to an OpenAI LLM to interact with the user in a knowledge-aware context.
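The sketch below shows the general shape of such a retrieval-augmented chatbot with conversation history, again using classic LangChain APIs with FAISS as a stand-in vector store; the chunk sizes, retriever settings, and store choice are illustrative assumptions.

```python
# Sketch of the knowledge-aware chatbot: chunk the scraped text, embed it into a
# vector store, and answer questions with conversation history injected.
# (Classic LangChain APIs; exact imports/signatures are version-dependent assumptions.)
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

def build_chatbot(page_texts: list[str]):
    chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).create_documents(page_texts)
    store = FAISS.from_documents(chunks, OpenAIEmbeddings())
    return ConversationalRetrievalChain.from_llm(
        ChatOpenAI(temperature=0),
        retriever=store.as_retriever(search_kwargs={"k": 4}),
    )

chatbot = build_chatbot(["...scraped text of workspace pages..."])
history: list[tuple[str, str]] = []
question = "When is the first problem set due?"
result = chatbot({"question": question, "chat_history": history})
history.append((question, result["answer"]))
```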
### 5. Collaborative Filtering-Based Recommendation
Provided that a user does not turn privacy settings on, our collaborative filtering-based recommendation system recommends URLs that other users in the organization have seen that are related to the user's current workspace.
### 6. Flask REST API
We expose all of our LLM capabilities, recommendation system, and other data queries for the frontend through a REST API served by Flask. This provides an easy interface between the external vendors (like LangChain, OpenAI, and HuggingFace), our Firebase database, the browser extension, and our React web app.
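A minimal sketch of what such an endpoint layer can look like is below; the route names and the `get_workspaces`/`answer_question` helpers are hypothetical placeholders for the project's actual logic.

```python
# Minimal sketch of the kind of Flask endpoints the frontend calls; route names and
# helper functions (get_workspaces, answer_question) are hypothetical.
from flask import Flask, jsonify, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # the React app and Chrome extension call this from other origins

@app.route("/workspaces/<user_id>", methods=["GET"])
def workspaces(user_id):
    return jsonify(get_workspaces(user_id))        # clusters + summaries from the database

@app.route("/chat", methods=["POST"])
def chat():
    body = request.get_json()
    answer = answer_question(body["workspace_id"], body["question"], body["history"])
    return jsonify({"answer": answer})

if __name__ == "__main__":
    app.run(port=5000)
```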
### 7. A Fantastic Frontend
Our frontend is built using the React.js framework. We use axios to interact with our backend server and display the relevant information for each workspace.
## Challenges we ran into
1. We had to deal with our K-Means Clustering algorithm outputting changing cluster means over time as new data is ingested, since the URLs that a user visits changes over time. We had to anchor previous data to the new clusters in a smart way and come up with a clever updating algorithm.
2. We had to employ caching of responses from the external LLMs (like OpenAI/LangChain) to operate under the rate limit. This was challenging, as it required revamping our database infrastructure for caching.
3. Enabling the Chrome extension to speak with our backend server was a challenge, as we had to periodically poll the user's browser history and deal with CORS (Cross-Origin Resource Sharing) errors.
4. We worked modularly which was great for parallelization/efficiency, but it slowed us down when integrating things together for e2e testing.
## Accomplishments that we're proud of
The scope of ways in which we were able to utilize Large Language Models to redefine the antiquated browsing experience and provide knowledge centralization.
This idea was a byproduct of our own experiences in college and high school -- we found ourselves spending significant amounts of time attempting to organize tab clutter systematically.
## What we learned
This project was an incredible learning experience for our team as we took on multiple technically complex challenges to reach our ending solution -- something we all thought that we had a potential to use ourselves.
## What's next for Origin
We believe Origin will become even more powerful at scale, since many users/organizations using the product would improve the ChatBot's ability to answer commonly asked questions, and the recommender system would perform better in aiding user's education or productivity experiences. | losing |
## Inspiration
Our team members have all witnessed the struggles our grandparents face due to language barriers in healthcare. They constantly need to call our parents to translate whenever a doctor or nurse visits their room. When our parents are unavailable, these challenges become even more daunting. This inspired us to create Care Voice, a solution to ensure seamless communication for non-English speaking patients.
## What it does
CareVoice is a medical translator mobile app designed to bridge the communication gap between doctors and patients. It records and translates the doctor's explanations into language that is easily understandable by the patient. Additionally, it provides background information on the doctor's recommended procedures.
CareVoice tailors its services to each user by considering their personal information, such as age, primary languages or dialects spoken, health conditions, and disabilities. This allows the app to customize translations and features to enhance accessibility. For example, it can enlarge text, offer read-aloud options, and use multisensory input to cater to various patient needs. CareVoice ensures that all patients, regardless of their disabilities, can receive and understand their medical information clearly.
## How we built it
We designed our application using Figma and implemented the front end in Swift using SwiftUI in Xcode. For the backend, we integrated several advanced technologies: DeepL, which provides context-sensitive translations; Groq, whose fast inference we used to run Whisper for transcribing the doctor's voice into English text and Meta LLaMA 70B for additional information on medical terms and context as well as personalized recommendations based on the patient's medical history; and Eleven Labs, which offered various voice options for a richer user experience. This combination allows us to deliver a seamless, user-friendly, and highly personalized application.
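To make the pipeline concrete, here is a hedged sketch of the backend calls described above, written in Python purely for illustration (the real app is SwiftUI). The Groq model IDs, the DeepL target language, and the prompt are assumptions, and the Eleven Labs text-to-speech step is omitted.

```python
# Illustrative backend pipeline in Python; model names and target language are
# assumptions, not the team's exact configuration.
import deepl
from groq import Groq

groq_client = Groq()                      # GROQ_API_KEY in the environment
translator = deepl.Translator("DEEPL_AUTH_KEY")

def explain_for_patient(audio_path: str, target_lang: str = "ZH") -> str:
    # 1. Transcribe the doctor's speech with Whisper running on Groq
    with open(audio_path, "rb") as f:
        transcript = groq_client.audio.transcriptions.create(
            file=(audio_path, f.read()), model="whisper-large-v3"
        ).text

    # 2. Ask a LLaMA 70B model to rephrase it in plain, patient-friendly language
    simplified = groq_client.chat.completions.create(
        model="llama3-70b-8192",
        messages=[
            {"role": "system", "content": "Rewrite medical explanations in simple terms."},
            {"role": "user", "content": transcript},
        ],
    ).choices[0].message.content

    # 3. Translate the simplified explanation into the patient's language with DeepL
    return translator.translate_text(simplified, target_lang=target_lang).text
```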
## Challenges we ran into
We encountered several challenges while integrating the backend with the frontend, particularly in connecting our text-to-voice feature with the existing frontend components. Additionally, since it was our first time working with backend development, we faced a steep learning curve in setting up and managing the backend infrastructure.
## Accomplishments that we're proud of
Despite our initial lack of experience with mobile application development, we successfully implemented a functional backend and developed a text-to-speech feature. We are also proud of our ability to design and implement the app's user interface within the given time constraints.
## What we learned
Throughout this project, we gained valuable insights into creating a text-to-voice, and voice to text feature, implementing our design using Swift, and effectively connecting the front end with the back end. These experiences have significantly enhanced our skills and understanding of mobile app development.
## What's next for Care Voice:
Due to time constraints, we couldn't implement all the features we envisioned. In the future, we plan to add text analysis capabilities, and the ability to save data for reviewing past results and analyzed information. These enhancements will further improve CareVoice's utility and user experience. | ## Inspiration
This project was inspired by Leon's father's first-hand experience with a lack of electronic medical records and realizing the need for more accessible patient experience.
## What it does
The system stores patients' medical records. It also allows patients to fill out medical forms using their voice, as well as electronically sign with their voice. Our theme while building it was accessibility, hence the voice control integration, a simple and easy-to-understand UI, and big, bold colours.
## How I built it
The front end is built on React Native, while the backend is built in Node.js using MongoDB Atlas as our database. For our speech-to-text processing, we used the Google Cloud Platform. We also used Twilio for our SMS reminder component.
## Challenges we ran into
There are three distinct challenges that we ran into. The first was trying to get Twilio to function correctly within the app. We were trying to use it on the frontend, but due to the nature of React Native and some of the Node.js libraries being used, it was not working. We solved the problem by deploying to a Heroku server and making REST calls.
A second challenge was trying to get the database queries to work from our backend. Although everything seemed right, it still did not work; through careful attention to detail and going over the code multiple times, the mistake was spotted and corrected.
The third and likely biggest challenge we faced was getting the speech to text streaming input to co-operate. In the beginning, it did not stop recording at the correct times and would capture a lot of noise from the background. This problem was eventually solved by redoing it by following a tutorial online.
## Accomplishments that I'm proud of
**WE FINISHED!** We honestly did not expect to finish if you asked us at 10 pm on Saturday night. However, things came through well which we were really proud of. We are also really proud of our UI/UX and think it is a very sleek and clean design. Two other things include accurate speech to text processing and dynamically filled values through our database at runtime.
## What I learned
**Joshua** - How to write server-side Javascript using node.js
**Leon** - Twilio
**Joy** - Speech to text streaming with react native
**Kevin** - React-native
## What's next for MediSign
If we were to continue to work on this project, we would first start by dynamically filling all values through our database. We would then focus a lot of attention on security as medical records are sensitive info. Thirdly, we would upgrade the UI/UX to be even better than before. | ## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains. | losing |
## Inspiration
CVS Health Challenge
## What it does
Calculate BMI and give health tips
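For reference, the calculation itself is the standard BMI formula, weight in kilograms divided by height in metres squared; shown below in Python purely for illustration, since the app itself is built in Android Studio.

```python
# The standard BMI formula the app computes (Python for illustration only).
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

# e.g. 70 kg at 1.75 m -> 70 / 3.0625 ≈ 22.9 (within the 18.5-24.9 "normal" range)
print(round(bmi(70, 1.75), 1))
```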
## How we built it
Android Studio
## Challenges we ran into
Learning how to use Android Studio
## Accomplishments that we're proud of
Learning how to use Android Studio
## What we learned
How to use Android Studio
## What's next for BMI Calculator
Add other health tips like skin care | ## Inspiration
It's very difficult to accurately estimate how much food is going into your stomach. Many existing apps provide inaccurate calorie counts since the food items are largely generalized.
## What it does
Our app allows users to take pictures of food items and it returns the amount of calories, what they must do to burn that many calories, as well as keep track of what food and how many calories they are eating throughout the day.
## How I built it
Programmed the stepper motor along with the pressure sensor using the Arduino IDE. Created the Android app using Android Studio. Connected to CloudSight API for object recognition processing.
## Challenges I ran into
Setting up the CloudSight API was quite troublesome.
## Accomplishments that I'm proud of
Creating a polished mechanical system that is not only functional, but aesthetically pleasing as well. Learning about image processing.
## What I learned
Before we finalized our project, we learned the process of 3D reconstruction using 2D images. We improved our image processing skills using OpenCV and OpenGL. Also, we learned how to integrate APIs for object recognition.
## What's next for Food Facts
Food Facts can be used by anyone who watches their diet, whether they are trying to gain muscle or lose fat. It will be more accurate than other apps on the market currently and it is an easier and faster process. | ## Inspiration
Exercise is extremely important and is most effective and healthy when it is evenly applied to all important muscles in your body. However, as students, many of us don't have enough time to finish a complete workout routine that trains every important muscle, and so we split up our exercise routines over several days. However, it can be a hassle to track which muscles you haven't exercised as often, generating friction between a busy student and quality exercise. SmartFit seeks to solve this issue.
## What it does
SmartFit is a workout assistant that helps users to become more healthy. It's built upon two main features: logging and recommendation. It tracks your workouts using the Google Assistant and tailors a workout that would benefit the user the most based on an undertrained muscle group, determined by analysis of their previous workouts. SmartFit is also beginner friendly since it offers a tutorial for every different workout. Through SmartFit, anyone can become fit and healthy in a smart way. SmartFit is like a personal trainer that can be accessed at any time and anywhere.
## How we built it
We built SmartFit using Voiceflow with calls to Google Sheets’ APIs and deployed on Google Actions to create a fully functional Google Assistant app. Smartfit’s workout logs are stored on Google Sheets and workout recommendations are generated automatically based on past workout data as well as exercise information curated from various professional fitness websites. The recommendation algorithm was designed using smart techniques and formulas within the Google Sheets Platform. JavaScript was used in order to parse user information of logs and prediction of recommendations within Voiceflow. We also made a static website that shows information regarding what SmartFit does.
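To illustrate the recommendation idea (which in the real app lives in Google Sheets formulas and Voiceflow logic), here is a small Python sketch that logs exercises against muscle groups and suggests the least-trained one; the exercise-to-muscle mapping is an assumption.

```python
# Minimal sketch of the recommendation idea: count recent sets per muscle group and
# suggest the group with the fewest. (Illustration only; not the app's Sheets logic.)
from collections import Counter

MUSCLE_GROUPS = ["chest", "back", "legs", "shoulders", "arms", "core"]

EXERCISE_TO_GROUP = {          # illustrative mapping, not the app's full table
    "bench press": "chest",
    "squat": "legs",
    "deadlift": "back",
    "plank": "core",
}

def recommend(workout_log: list[str]) -> str:
    counts = Counter({g: 0 for g in MUSCLE_GROUPS})
    for exercise in workout_log:
        group = EXERCISE_TO_GROUP.get(exercise.lower())
        if group:
            counts[group] += 1
    # Recommend the least-trained muscle group from the recent logs
    return min(counts, key=counts.get)

print(recommend(["bench press", "bench press", "squat", "deadlift"]))  # -> "shoulders" (a group with zero logged sets)
```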
## Challenges we ran into
Since Voiceflow is a relatively new application, we had a hard time finding assistance on certain features that we were unsure about. For example, Voiceflow has the capability to display images using the Card Block or the Display Block, but it was very hard to find help regarding those features. Due to the lack of documentation, it was very hard to solve problems promptly without halting all progress. Another challenge that we faced was the integration of Google's Firebase. We ended up using Google Sheets instead of Firebase because Google Sheets is already integrated with Voiceflow and is faster and more effective than Firebase in this application.
## Accomplishments that we’re proud of
We’re proud of creating a robust voice app using Voiceflow and creating a Google Action usable on Google Assistant.
## What we learned
We learned how to use a new application called Voiceflow in order to develop Google Assistant and Amazon Alexa applications without any pre-existing knowledge. We learned how to use Google’s Dialogflow in order to style the front end of SmartFit in the Google Assistant environment. We taught ourselves the intricacies of Google’s API client in order to host SmartFit to any Gmail email address that the user wishes.
## What's next for SmartFit
Since SmartFit is on Google’s API platform, sending the app to worldwide alpha testers can be easily accomplished. From the alpha testers’ feedback, more features and user interface improvements will be implemented. The next step for SmartFit is to migrate to a more robust database and increase user customizability via FireBase and MongoDB. The Artificial Intelligence to predict workout recommendations would be implemented using TensorFlow, instead of hardcoded smart algorithms, due to a larger set of users.
After the technology side of things is taken care of, SmartFit can have the opportunity to be implemented into the marketplace. Users can pay for the SmartFit service with different plans in order to have more data analysis and access to tutorials. | losing |
## Foreword
Before we begin, a **very big thank you** to the NVIDIA Jetson team for their generosity in making this project submission possible.
## Inspiration
Nearly 100% of sight-assistance devices for the blind fall into just two categories: Voice assistants for navigation, and haptic feedback devices for directional movement. Although the intent behind these devices is noble, they fail in delivering an effective sight-solution for the blind.
Voice assistant devices that relay visual information from a camera-equipped computer to the user are not capable of sending data to the user in real time, making them very limited in capability. Additionally, the blind are heavily dependent on their hearing in order to navigate environments. They have to use senses besides vision to the limit to make up for their lack of sight, and using a voice assistant clogs up and introduces noise to this critical sensory pathway.
The haptic feedback devices are even more ineffective; these simply tell the user to move left, right, backwards, etc. While these devices provide real-time feedback and don’t introduce noise to one’s hearing like with the voice assistants, they provide literally no information regarding what is in front of the user; it simply just tells them how to move. This doesn’t add much value for the blind user.
It's 2021. Voice assistant and haptic feedback directional devices are a thing of the past. Having blind relatives and friends, we wanted to create a project that leverages the latest advancements in technology to create a truly transformative solution. After about a week's worth of work, we've developed OptiLink; a brain machine interface that feeds AI-processed visual information **directly to the user's brain** in real-time, eliminating the need for ineffective voice assistant and directional movement assistants for the blind.
## What it does
OptiLink is the next generation of solutions for the blind. Instead of using voice assistants to tell the user what’s in front of them, it sends real-time AI processed visual information directly to the user’s brain in a manner that they can make sense of. So if our object detection neural network detects a person, the blind user will actually be able to tell that a person is in front of them through our brain-machine interface. The user will also be able to gauge distance to environmental obstacles through echolocation, once again directly fed to their brain.
Object detection is done through a camera equipped NVIDIA Jetson Nano; a low-power single board computer optimized for deep learning. A Bluetooth enabled nRF52 microcontroller connected to an ultrasonic sensor provides the means to process distances for echolocation. These modules are conveniently packed in a hat for use by the blind.
On the Nano, an NVIDIA Jetpack SDK accelerated MobileNet neural network detects objects (people, cars, etc.), and sends an according output over Bluetooth via the Bleak library to 2 Neosensory Buzz sensory substitution devices located on each arm. These devices, created by neuroscientists David Eagleman and Scott Novich at the Baylor School of Medicine, contain 4 LRAs to stimulate specific receptors in your skin through patterns of vibration. The skin receptors send electrical information to your neurons and eventually to your brain, and your brain can learn to process this data as a sixth sense.
Specific patterns of vibration on the hands tell the user what they’re looking at (for example, a chair will correspond to pattern A, a car will correspond to pattern B). High priority objects like people and cars will be relayed through feedback from the right hand, while low priority objects (such as kitchenware and laptops) will be relayed via feedback from the left hand. There are ~90 such possible objects that can be recognized by the user. Ultrasonic sensor processed distance is fed through a third Neosensory Buzz on the left leg, with vibrational intensity corresponding to distance to an obstacle.
## How we built it
OptiLink's object detection inferences are all done through the NVIDIA Jetson Nano running MobileNet. Through the use of NVIDIA's TensorRT to accelerate inferencing, we were able to run this object detection model at a whopping 24 FPS with just about 12 W of power. Communication with the 2 Neosensory Buzz feedback devices on the arm were done through Bluetooth Low Energy via the Bleak library and the experimental Neosensory Python SDK. Echolocation distance processing is done through an Adafruit nRF52840 microcontroller connected to an ultrasonic sensor; it relays processed distance data (via Bluetooth Low Energy) to a third Neosensory Buzz device placed on the leg.
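A rough sketch of the detection loop is below, using NVIDIA's jetson-inference Python bindings. The class-to-hand mapping is illustrative, and `send_pattern()` is a hypothetical stand-in for the Bleak/Neosensory BLE call (rate-limited in the real system, as described under challenges).

```python
# Sketch of the detection loop on the Jetson Nano using jetson-inference;
# send_pattern() is a hypothetical placeholder for the BLE vibration call.
import jetson.inference
import jetson.utils

HIGH_PRIORITY = {"person", "car", "truck", "motorcycle", "bus"}   # right-hand Buzz

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")        # CSI camera on the Nano

def send_pattern(hand: str, class_name: str):
    """Hypothetical: map class_name to a vibration frame and write it over BLE."""
    pass

while True:
    img = camera.Capture()
    for det in net.Detect(img):
        class_name = net.GetClassDesc(det.ClassID)
        hand = "right" if class_name in HIGH_PRIORITY else "left"
        send_pattern(hand, class_name)
```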
## Challenges we ran into
This was definitely the most challenging project to execute that we've made to date (and we've made quite a few). Images contain tons of data, and processing, condensing, and packaging this data in an understandable manner through just 2 data streams is a very difficult task. However, by grouping the classes into general categories (for example, cars, motorcycles, and trucks were all grouped into motor vehicles) and then sending a corresponding signal for the grouped category, we could condense information in a manner that is more user friendly. Additionally, we included a built-in frame rate limiter, which prevents the user from receiving too much information too quickly from the Neosensory Buzz devices. This allows the user to far more effectively understand the vibrational data from the feedback devices.
## Accomplishments that we're proud of
We think we’ve created a unique solution to sight-assistance for the blind. We’re proud to have presented a fully functional project, especially considering the complexities involved in its design.
## What we learned
This was our first time working with the NVIDIA Jetson Nano. We learned a ton about Linux and how to leverage NVIDIA's powerful tools for machine learning (The Jetpack SDK and TensorRT). Additionally, we gained valuable experience with creating brain-machine interfaces and learned how to process and condense data for feeding into the nervous system.
## What's next for OptiLink
OptiLink has room for improvement in its external design, user-friendliness, and range of features. The device currently has a learning curve when it comes to understanding all of the patterns; of course, it takes time to properly understand and make sense of new sensory feedback integrated into the nervous system. We could create a mobile application for training pattern recognition. Additionally, we could integrate more data streams in our product to allow for better perception of various vibrational patterns corresponding to specific classes. Physical design elements could also be streamlined and improved. There’s lots of room for improvement, and we’re excited to continue working on this project! | ## Inspiration
Our project, "**Jarvis**," was born out of a deep-seated desire to empower individuals with visual impairments by providing them with a groundbreaking tool for comprehending and navigating their surroundings. Our aspiration was to bridge the accessibility gap and ensure that blind individuals can fully grasp their environment. By providing the visually impaired community access to **auditory descriptions** of their surroundings, a **personal assistant**, and an understanding of **non-verbal cues**, we have built the world's most advanced tool for the visually impaired community.
## What it does
"**Jarvis**" is a revolutionary technology that boasts a multifaceted array of functionalities. It not only perceives and identifies elements in the blind person's surroundings but also offers **auditory descriptions**, effectively narrating the environmental aspects they encounter. We utilize a **speech-to-text** and **text-to-speech model** similar to **Siri** / **Alexa**, enabling ease of access. Moreover, our model possesses the remarkable capability to recognize and interpret the **facial expressions** of individuals who stand in close proximity to the blind person, providing them with invaluable social cues. Furthermore, users can ask questions that may require critical reasoning, such as what to order from a menu or navigating complex public-transport-maps. Our system is extended to the **Amazfit**, enabling users to get a description of their surroundings or identify the people around them with a single press.
## How we built it
The development of "**Jarvis**" was a meticulous and collaborative endeavor that involved a comprehensive array of cutting-edge technologies and methodologies. Our team harnessed state-of-the-art **machine learning frameworks** and sophisticated **computer vision techniques**, including **Hume**, **LLaVA**, and **OpenCV**, to analyze the environment, and used **Next.js** to create our frontend, which was connected to the **Amazfit smartwatch** through **ZeppOS**.
## Challenges we ran into
Throughout the development process, we encountered a host of formidable challenges. These obstacles included the intricacies of training a model to recognize and interpret a diverse range of environmental elements and human expressions. We also had to grapple with optimizing the model for real-time usage on the **Zepp smartwatch** and getting the **vibrations** to trigger according to the **Hume** emotional analysis model, and we faced issues while integrating **OCR (Optical Character Recognition)** capabilities with the **text-to-speech** model. However, our team's relentless commitment and problem-solving skills enabled us to surmount these challenges.
## Accomplishments that we're proud of
Our proudest achievements in the course of this project encompass several remarkable milestones. These include the successful development of "**Jarvis**" a model that can audibly describe complex environments to blind individuals, thus enhancing their **situational awareness**. Furthermore, our model's ability to discern and interpret **human facial expressions** stands as a noteworthy accomplishment.
## What we learned
# Hume
**Hume** is instrumental for our project's **emotion analysis**. This information is then translated into **audio descriptions** and **vibrations** on the **Amazfit smartwatch**, providing users with valuable insights about their surroundings. By capturing and analyzing facial expressions, our system can provide feedback on the **emotions** displayed by individuals in the user's vicinity. This feature is particularly beneficial in social interactions, as it aids users in understanding **non-verbal cues**.
# Zepp
Our project involved a deep dive into the capabilities of **ZeppOS**, and we successfully integrated the **Amazfit smartwatch** into our web application. This integration is not just a technical achievement; it has far-reaching implications for the visually impaired. With this technology, we've created a user-friendly application that provides an in-depth understanding of the user's surroundings, significantly enhancing their daily experiences. By using the **vibrations**, the visually impaired are notified of their actions. Furthermore, the intensity of the vibration is proportional to the intensity of the emotion measured through **Hume**.
# Zilliz
We used **Zilliz** to host **Milvus** online and stored a dataset of images and their vector embeddings. Each image was labeled with a person's identity; hence, we were able to build an **identity-classification** tool using **Zilliz's** reverse-image-search capability. We further set a minimum confidence threshold below which people's identities were not recognized, i.e. when their data was not in **Zilliz**. We estimate the accuracy of this model to be around **95%**.
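A hedged sketch of that lookup with the pymilvus client is below; the collection name, field names, and distance threshold are assumptions rather than the project's actual configuration.

```python
# Sketch of the reverse-image identity lookup against Milvus hosted on Zilliz Cloud;
# collection name, field names, and the threshold value are assumptions.
from pymilvus import connections, Collection

connections.connect(uri="YOUR_ZILLIZ_URI", token="YOUR_API_KEY")
faces = Collection("face_embeddings")
faces.load()

def identify(query_embedding, threshold: float = 0.6) -> str:
    hits = faces.search(
        data=[query_embedding],
        anns_field="embedding",
        param={"metric_type": "L2", "params": {"nprobe": 10}},
        limit=1,
        output_fields=["name"],
    )[0]
    if not hits or hits[0].distance > threshold:
        return "unknown"          # beyond the cutoff -> identity not recognized
    return hits[0].entity.get("name")
```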
# Github
We acquired a comprehensive understanding of the capabilities of version control using **Git** and established an organization. Within this organization, we allocated specific tasks labeled as "**TODO**" to each team member. **Git** was effectively employed to facilitate team discussions, workflows, and identify issues within each other's contributions.
The overall development of "**Jarvis**" has been a rich learning experience for our team. We have acquired a deep understanding of cutting-edge **machine learning**, **computer vision**, and **speech synthesis** techniques. Moreover, we have gained invaluable insights into the complexities of real-world application, particularly when adapting technology for wearable devices. This project has not only broadened our technical knowledge but has also instilled in us a profound sense of empathy and a commitment to enhancing the lives of visually impaired individuals.
## What's next for Jarvis
The future holds exciting prospects for "**Jarvis.**" We envision continuous development and refinement of our model, with a focus on expanding its capabilities to provide even more comprehensive **environmental descriptions**. In the pipeline are plans to extend its compatibility to a wider range of **wearable devices**, ensuring its accessibility to a broader audience. Additionally, we are exploring opportunities for collaboration with organizations dedicated to the betterment of **accessibility technology**. The journey ahead involves further advancements in **assistive technology** and greater empowerment for individuals with visual impairments. | ## Inspiration
As someone who grew up with a visual impairment in one of my eyes, it was clear to me how difficult it can be to navigate your surroundings without external help. This project is mainly inspired by assisted parking technology and Spider-Man's "spidey-sense", as both don't require direct sight in order to feed information about surroundings to the user.
## What it does
This project aims to provide low-cost, low-training assistive tech for the visually impaired around the world. Using TinyML and a variety of sensors, Buzzy Sense translates information about the surroundings into buzzes and clicks generated by a trio of buzzers around the user's head. Our model is currently trained to recognize some everyday obstacles the user may face, such as ascending stairs, descending stairs, tables, and chairs, and notifies the user accordingly with customizable buzz patterns. In addition, any obstacle that is not within our model is detected by a pair of ultrasonic sensors near the user's temples, and an IR obstacle sensor detects any sudden change in elevation near the user's feet (a pothole, for example). This combination identifies common obstacles a visually impaired user may struggle with in an unfamiliar setting while also keeping a general overview of the user's surrounding environment.
## How we built it
Using sample data of stairs, tables, and chairs from open datasets on Kaggle, we trained and ran our ML model using Edge Impulse and Qualcomm's TinyML Arduino Kit. In order to fine-tune the model, we added over 200 images of our sample list of obstacles from around the Myhal building. We then loaded the model onto our TinyML kit and proceeded to integrate the general sensor network into our code and hardware.
## Challenges we ran into
Most of our challenges were hardware-related, with some complications in learning how to effectively incorporate ML into the rest of our hardware. Our hardware was greatly limited to one of each component piece from the MakeUofT hardware library, and not every component we wanted was available. The components used were generally not ideal for our application due to their size and limited range in proximity detection. However, after much tinkering we managed to fit everything together and create an effective method of detecting the overall surroundings while giving the user enough haptic feedback to navigate without much assistance.
As for incorporating ML, the TinyML shield took up all pins available on the kit's Arduino Nano 33 BLE Sense, while our buzzer and sensor network took up all pins available on our Arduino Uno board. We wanted one cohesive script to have the two parts working in conjunction with one another to provide the best experience for the user. Initially we tried using Bluetooth, however it failed to connect correctly and we were pressed for time. Eventually we found a possible solution through a direct connection between the two boards.
## Accomplishments that we're proud of
The model for TinyML proved quite adept at identifying our listed obstacles after some time and effort was put into learning how to effectively build and train the model.
## What we learned
We learned how to use Edge Impulse in order to build and generate models, in addition to implementing it on an arduino nano. We also learned how to better use C++ as most of our experiences with coding were in other languages.
## What's next for Buzzy Sense
Building a better model with a larger range of identifiable objects is a high priority for us as there are a great variety of different hazards depending on the users environment. In addition to that, the ability for the user to customize the haptic feedbacks for each type of object detected would be useful, essentially allowing the user to build their own library of haptic feedbacks for daily obstructions. This would most likely be been done through an app that connects to Buzze Sense and would use an audio screenreader feature so that users may navigate settings without the need for sight. | winning |
## Inspiration
We were interested in using health data and we created a very impressive model. We decided to focus our project around it as a result.
## What it does
It uses a predictive AI model to tell a user their chances of developing diabetes.
## How we built it
The model was built using Python with TensorFlow, Keras, and scikit-learn. We used Flask to serve the model and Express as the backend API for our Next.js application.
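As a rough illustration of that stack, the sketch below defines a small Keras classifier over tabular health features and exposes it through a Flask endpoint; the feature list, layer sizes, and route are assumptions, not the actual model.

```python
# Minimal sketch of a tabular diabetes-risk classifier served by Flask;
# feature count, architecture, and route are illustrative assumptions.
import numpy as np
from tensorflow import keras
from flask import Flask, jsonify, request

N_FEATURES = 8   # e.g. glucose, blood pressure, BMI, age, ...

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(N_FEATURES,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),        # probability of diabetes
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=50, validation_split=0.2)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    features = np.array(request.get_json()["features"]).reshape(1, N_FEATURES)
    probability = float(model.predict(features)[0][0])
    return jsonify({"risk": probability})
```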
## Challenges we ran into
We found difficulties integrating certain APIs and ended up dropping certain technologies, focusing instead on the model. We also found connecting the various servers to be a challenge.
## Accomplishments that we're proud of
We were able to create an extremely accurate predictive model that does not require massive amounts of data.
## What we learned
We learned how to use TensorFlow and Keras, while becoming more proficient in JavaScript, Next.js, and Flask.
## What's next for Diabeater
We aim to add more features, including user authentication, and give users the ability to plan out how to reduce their chances.
UV-Vis spectrophotometry is a powerful tool within the chemical world, responsible for many diagnostic tests (including water purity assessments, ELISA tests, Bradford protein quantity assays) and tools used within the environmental and pharmaceutical industry. This technique uses a detector to measure a liquid's absorption of light, which can then be correlated to its molarity (the amount of a substance within the solution). Most UV-Vis spectrophotometers, however, are either extremely expensive or bulky, making them unsuitable for low-resource situations. Here, we implement an image processing and RGB sensing algorithm to convert a smartphone into a low-cost spectrophotometer to be used anywhere and everywhere.
Inspired by a team member’s experience using an expensive spectrophotometer to complete protein analysis during a lab internship, the Hydr8 team quickly realized this technology could easily be scaled into a smartphone, creating a more efficient, low-cost device.
## What it does
In this project, we have designed and developed a smartphone-based system that acts as a cheap spectrophotometer to detect and quantify contaminants in a solution.
## How I built it
We used the OpenCV Python package to segment images, isolate samples, and detect the average Red-Green-Blue color. We then wrote an algorithm to convert this color average into an absorbance metric. Finally, we wrote functions that plot the absorbance vs. concentration of the calibration images, and then use linear regression to quantify the contaminant concentration of the unknown solution.
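The core math can be sketched as follows: average a channel's intensity over the sample region, convert it to absorbance against a blank image in Beer-Lambert fashion, and fit a calibration line with linear regression. The channel choice, file names, and calibration concentrations below are illustrative assumptions.

```python
# Sketch of the RGB-to-absorbance pipeline; ROI handling, channel choice, and the
# calibration values are assumptions for illustration.
import cv2
import numpy as np

def mean_intensity(image_path: str) -> float:
    img = cv2.imread(image_path)                 # BGR image of the isolated sample
    return float(img[:, :, 1].mean())            # average green-channel intensity

def absorbance(sample_path: str, blank_path: str) -> float:
    # Beer-Lambert style estimate: A = -log10(I_sample / I_blank)
    return -np.log10(mean_intensity(sample_path) / mean_intensity(blank_path))

# Calibration: known concentrations (e.g. mg/L) and their measured absorbances
concs = np.array([0.5, 1.0, 2.0, 4.0])
absorbs = np.array([absorbance(f"cal_{c}.jpg", "blank.jpg") for c in concs])
slope, intercept = np.polyfit(concs, absorbs, 1)

# Unknown sample: invert the calibration line to estimate concentration
unknown_A = absorbance("unknown.jpg", "blank.jpg")
print("Estimated concentration:", (unknown_A - intercept) / slope)
```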
## Challenges I ran into
Configuring various unfamiliar packages and libraries to work within our proposed computational framework
## Accomplishments that I'm proud of
For most of the team, this was the first hackathon we have participated in; the experience proved to be fun but challenging. Coming up with a novel idea, as well as working together to create the necessary components, are aspects of the project we feel especially proud of.
## What's next for HYDR8
With time and effort, we hope to improve and streamline Hydr8 to create a more sensitive sensor algorithm that can detect lower concentrations of analyte. Our ultimate goal is to finalize implementation of the graphic user interface and release the app so that it can be used where most needed, in places such as developing countries and disaster-relief zones to ensure safe drinking water. | ## Inspiration
When we joined the hackathon, we began brainstorming about problems in our lives. After discussing constant struggles in their lives with many friends and family members, one response was ultimately shared: health. Interestingly, one of the biggest health concerns that impacts everyone in their lives comes from their *skin*. Even though the skin is the biggest organ in the body and is the first thing everyone notices, it is the most neglected part of the body.
As a result, we decided to create a user-friendly multi-modal model that can discover their skin discomfort through a simple picture. Then, through accessible communication with a dermatologist-like chatbot, they can receive recommendations, such as specific types of sunscreen or over-the-counter medications. Especially for families that struggle with insurance money or finding the time to go and wait for a doctor, it is an accessible way to understand the blemishes that appear on one's skin immediately.
## What it does
The app is a skin-detection model that detects skin diseases through pictures. Using a multi-modal neural network trained on thousands of data entries from actual patients, we attempt to identify the disease. Then, we provide users with information on their condition and recommendations on how to treat it (such as using specific SPF sunscreen or over-the-counter medications), and finally, we point them to their nearest pharmacies and hospitals.
## How we built it
Our project, SkinSkan, was built through a systematic engineering process to create a user-friendly app for early detection of skin conditions. Initially, we researched publicly available datasets that included treatment recommendations for various skin diseases. We implemented a multi-modal neural network model after finding a diverse dataset covering roughly 2,000 patients with multiple diseases. Through a combination of convolutional neural networks, ResNet, and feed-forward neural networks, we created a comprehensive model incorporating clinical and image datasets to predict possible skin conditions. Furthermore, to make customer interaction seamless, we implemented a chatbot using GPT-4o from the OpenAI API to provide users with accurate and tailored medical recommendations. By developing a robust multi-modal model capable of diagnosing skin conditions from images and user-provided symptoms, we make strides in making personalized medicine a reality.
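A minimal sketch of such a two-branch model in Keras is below, with a ResNet backbone for the image and a small dense branch for clinical features merged before the classifier; the input sizes, class count, and layer widths are assumptions.

```python
# Sketch of a two-branch multi-modal classifier: ResNet features for the skin image
# plus a feed-forward branch for clinical data, concatenated before the output layer.
# Input shapes, class count, and widths are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 27            # number of skin conditions (assumption)
NUM_CLINICAL = 10           # age, duration, itch, etc. (assumption)

image_in = keras.Input(shape=(224, 224, 3))
backbone = keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
image_feat = backbone(image_in)

clinical_in = keras.Input(shape=(NUM_CLINICAL,))
clinical_feat = layers.Dense(32, activation="relu")(clinical_in)

merged = layers.Concatenate()([image_feat, clinical_feat])
merged = layers.Dense(128, activation="relu")(merged)
output = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = keras.Model(inputs=[image_in, clinical_in], outputs=output)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```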
## Challenges we ran into
The first challenge we faced was finding the appropriate data. Most of the data we encountered was not comprehensive enough and did not include recommendations for skin diseases. The data we ultimately used was from Google Cloud, which included the dermatology and weighted dermatology labels. We also encountered overfitting on the training set, so we experimented with the number of epochs, cropped the input images, and used ResNet layers to improve accuracy. We finally chose the best epoch by plotting the loss vs. epoch and accuracy vs. epoch graphs. Another challenge was utilizing the free Google Colab TPU, which we were able to resolve by switching between devices.
## Accomplishments that we're proud of
We are all proud of the model we trained and put together, as this project had many moving parts. This experience has had its fair share of learning moments and pivoting directions. However, through a great deal of discussions and talking about exactly how we can adequately address our issue and support each other, we came up with a solution. Additionally, in the past 24 hours, we've learned a lot about learning quickly on our feet and moving forward. Last but not least, we've all bonded so much with each other through these past 24 hours. We've all seen each other struggle and grow; this experience has just been gratifying.
## What we learned
One of the aspects we learned from this experience was how to use prompt engineering effectively and ground an AI model in the information the user provides. We also learned how to incorporate multi-modal data to be fed into a generalized convolutional and feed-forward neural network. In general, we got more hands-on experience working with RESTful APIs. Overall, this experience was incredible. Not only did we elevate our knowledge and hands-on experience in building a comprehensive model like SkinSkan, we were able to solve a real-world problem. From learning more about the intricate heterogeneities of various skin conditions to skincare recommendations, we were able to test our app on our own skin and several of our friends' skin using a simple smartphone camera to validate the performance of the model. It's so gratifying to see the work that we've built being put into use and benefiting people.
## What's next for SkinSkan
We are incredibly excited for the future of SkinSkan. By expanding the model to incorporate more minute details of the skin and detect more subtle and milder conditions, SkinSkan will be able to help hundreds of people detect conditions that they may have ignored. Furthermore, by incorporating medical and family history alongside genetic background, our model could be a viable tool that hospitals around the world could use to direct them to the right treatment plan. Lastly, in the future, we hope to form partnerships with skincare and dermatology companies to expand the accessibility of our services for people of all backgrounds. | losing |
## Inspiration:
The inspiration for Kisan Mitra came from the realization that Indian farmers face a number of challenges in accessing information that can help them improve their productivity and incomes. These challenges include:
```
Limited reach of extension services
Lack of awareness of government schemes
Difficulty understanding complex information
Language barriers
```
Kisan Mitra is designed to address these challenges by providing farmers with timely and accurate information in a user-friendly and accessible manner.
## What it does :
Kisan Mitra is a chatbot that can answer farmers' questions on a wide range of topics, including:
```
Government schemes and eligibility criteria
Farming techniques and best practices
Crop selection and pest management
Irrigation and water management
Market prices and weather conditions
```
Kisan Mitra can also provide farmers with links to additional resources, such as government websites and agricultural research papers.
## How we built it:
Kisan Mitra is built using the PaLM API, which is a large language model from Google AI. PaLM is trained on a massive dataset of text and code, which allows it to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
Kisan Mitra is also integrated with a number of government databases and agricultural knowledge bases. This ensures that the information that Kisan Mitra provides is accurate and up-to-date.
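For illustration, a call to the PaLM API through the google-generativeai package might look like the hedged sketch below; the model name, prompt wording, and example scheme context are assumptions about how the grounding is done.

```python
# Minimal sketch of the kind of PaLM API call the chatbot makes; model name and
# grounding prompt are assumptions, not the project's exact implementation.
import google.generativeai as palm

palm.configure(api_key="YOUR_PALM_API_KEY")

def answer_farmer(question: str, scheme_context: str) -> str:
    prompt = (
        "You are Kisan Mitra, an assistant for Indian farmers. Answer simply, "
        "using only the information below when it is relevant.\n\n"
        f"Government scheme information:\n{scheme_context}\n\n"
        f"Farmer's question: {question}"
    )
    response = palm.generate_text(model="models/text-bison-001", prompt=prompt)
    return response.result

# Example usage with illustrative scheme context
print(answer_farmer(
    "Am I eligible for PM-KISAN?",
    "PM-KISAN provides income support per year to landholding farmer families...",
))
```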
## Challenges we ran into:
One of the biggest challenges we faced in developing Kisan Mitra was making it accessible to farmers of all levels of literacy and technical expertise. We wanted to create a chatbot that was easy to use and understand, even for farmers who have never used a smartphone before.
Another challenge was ensuring that Kisan Mitra could provide accurate and up-to-date information on a wide range of topics. We worked closely with government agencies and agricultural experts to develop a knowledge base that is comprehensive and reliable.
## Accomplishments that we're proud of:
We are proud of the fact that Kisan Mitra is a first-of-its-kind chatbot that is designed to address the specific needs of Indian farmers. We are also proud of the fact that Kisan Mitra is user-friendly and accessible to farmers of all levels of literacy and technical expertise.
## What we learned:
We learned a lot while developing Kisan Mitra. We learned about the challenges that Indian farmers face in accessing information, and we learned how to develop a chatbot that is both user-friendly and informative. We also learned about the importance of working closely with domain experts to ensure that the information that we provide is accurate and up-to-date.
## What's next for Kisan Mitra:
We are committed to continuing to develop and improve Kisan Mitra. We plan to add new features and functionality, and we plan to expand the knowledge base to cover more topics. We also plan to work with more government agencies and agricultural experts to ensure that Kisan Mitra is the best possible resource for Indian farmers.
We hope that Kisan Mitra will make a positive impact on the lives of Indian farmers by helping them to improve their productivity and incomes. | ## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains. | ## Inspiration
The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. Utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.
## What it does
Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression.
## How we built it
With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our lives. We used JavaScript, HTML and CSS for the website, and used them to communicate with a Flask backend that runs our Python scripts involving API calls and such. We have API calls to OpenAI text embeddings, Cohere's xlarge model, GPT-3's API, OpenAI's Whisper speech-to-text model, and several modules for getting an mp4 from a YouTube link, text from a PDF, and so on.
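A simplified sketch of the page-ranking and answering flow is below: embed the question and pages, rank by cosine similarity, and prompt a chat model with the best pages. The OpenAI model names here are common defaults and are assumptions about the team's exact choices.

```python
# Sketch of the semantic search + answer step; model names are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer(question: str, pages: list[str], top_k: int = 3) -> str:
    page_vecs, q_vec = embed(pages), embed([question])[0]
    # Cosine similarity between the question and every page
    sims = page_vecs @ q_vec / (np.linalg.norm(page_vecs, axis=1) * np.linalg.norm(q_vec))
    best_pages = [pages[i] for i in np.argsort(sims)[::-1][:top_k]]
    prompt = ("Answer the question using only these pages:\n\n"
              + "\n\n".join(best_pages) + f"\n\nQuestion: {question}")
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```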
## Challenges we ran into
We had problems getting the backend on Flask to run on a Ubuntu server, and later had to instead run it on a Windows machine. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth from the front end to the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to Javascript.
## Accomplishments that we're proud of
Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 based on extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answering complex questions about it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness of it).
## What we learned
As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up
What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, aws and CPU running Whisper. | winning |
We went on relentlessly. Beating back against the tide (that being lack of documentation), we created something truly magnificent out of an underused device. What's next you ask? Well, we'll go on with our lives, but in the back of our minds we will always remember the times we had, and the beauty we effected in this sorry, sullen world. | ## Inspiration
Our inspiration came from the importance of connecting with family and cherishing proud personal stories. We recognized that many elderly people in nursing homes feel isolated from their families, despite the wealth of memories they carry. These memories hold so much value about family history, wisdom, and identity. By creating a platform that enables them to reflect on and share these moments, we aimed to bridge generational gaps and strengthen family bonds. Through storytelling, we want to foster a tight family bond, ensuring that cherished memories are passed down and that the elderly feel heard, valued, and connected.
We wanted to emphasize the story aspect of these memories. When people share their memories with their family, especially virtually, they aren't able to fully relive or cherish that memory; a simple text message can't do justice to a fond memory. Thus, we wanted to bring life into the memories shared within families online, and especially give elderly people who might not meet their families often an immersive experience with their family's memories.
## What it does
Memento allows families to document fond memories that they have and share them with the user. Families can upload memories that contain a date, description, and image. We target this product at the elderly in nursing homes, who are often alone and can benefit from having someone like family to talk to. The elderly user can then speak to the application and have a conversation about the details of any memory. The application will also display the most relevant image to the conversation to help improve the experience. This enables the elderly user to feel like they are talking to a family member or someone they know well. It allows them to stay connected with their loved ones without their continuous presence.
## How we built it
We designed Memento to be simple and accessible for both elderly users and their families. For this reason, we used **Reflex** to implement an elegant UI, and implemented a **Chroma** database to store the memories and their embeddings for search. We also integrated Whisper, a speech-to-text model, through **Groq's** fast inference API to decode what the elderly person is saying. Using this input, we query our database and feed this information through **Gemini**, an LLM developed by Google, to give a coherent response that incorporates information from the families' inputs. Finally, we used **Deepgram's** text-to-speech model to convert the LLM's outputs back to an audio format that we could speak back to the elderly user.
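The retrieval step between transcription and speech synthesis can be sketched as below, querying Chroma and prompting Gemini; the Groq Whisper and Deepgram steps are omitted, and the model name, collection name, and prompt are assumptions.

```python
# Sketch of the retrieval-augmented response step; STT (Groq Whisper) and TTS
# (Deepgram) are omitted, and names/prompts are assumptions.
import chromadb
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_KEY")
gemini = genai.GenerativeModel("gemini-1.5-flash")

chroma = chromadb.PersistentClient(path="./memories_db")
memories = chroma.get_or_create_collection("memories")

# Family members upload memories (date + description); Chroma embeds them for search
memories.add(
    ids=["m1"],
    documents=["July 2003: The whole family camping at Lake Tahoe; Grandpa caught a trout."],
)

def respond(transcribed_question: str) -> str:
    hits = memories.query(query_texts=[transcribed_question], n_results=3)
    context = "\n".join(hits["documents"][0])
    prompt = (
        "You are a warm, familiar companion. Using these family memories:\n"
        f"{context}\n\nRespond to: {transcribed_question}"
    )
    return gemini.generate_content(prompt).text
```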
## Challenges we ran into
* **Integration**: It was difficult to integrate all of the sponsor’s softwares into the final application; we had to pore over documentation while becoming familiar with each API, which led to many hours of debugging.
* **Non-determinism**: Our models were non-deterministic; errors caused by specific outputs from the LLM were hard to replicate. Due to the background noise, we also could not efficiently test our speech-to-text model’s accuracy.
* **Inference speed**: Throughout this application, we make many API calls to large models, such as Whisper, Gemini, and the Aura TTS model. Because of this, we had to find clever optimizations to speed up the inference time to quickly speak back to the elderly user, especially since the WiFi was unusable most of the time.
## Accomplishments that we're proud of
* **Design and User Experience**: We are proud of our design since it encompasses the mood we were aiming for – a warm, welcoming environment, focusing on the good things that happen in life.
* **Large Language Model and Vector Search**: We are especially proud of how the LLM turned out and how well the RAG model worked. We spent lots of time prompting the different components to create the warm, empathetic, and welcoming environment the LLM provides.
* **TTS and STT**: Although we struggled a bit with this part, we are really proud about how it turned out. We feel we did a great job encompassing the ideals of the product by allowing users to reflect on past memories and connect closer with family.
## What we learned
Working with STT and TTS models: many members of our group had never worked with speech-to-text or text-to-speech models, so this was a learning experience for all of us. We learned about the impressive accuracy that the state-of-the-art models are able to achieve but also encountered some of the drawbacks of these models, since many of them don’t work as well with moderate levels of background noise.
How to make a great UI:
## What's next for Memento
Because of time constraints, there were many features and improvements we wanted to implement but could not.
* **Continuous LLM Conversation**: We wanted to be able to talk to the LLM continuously without having to press a microphone button. Due to time constraints, we were not able to implement this feature
* **User Personalization and Customization**: We aimed to personalize the website to users by adding custom themes, colors, and fonts, but we ran out of time to do so. | ## Inspiration
a
## What it does
## How we built it
## Challenges we ran into
## Accomplishments that we're proud of
## What we learned
## What's next for Test | partial |
## Inspiration
We were recently introduced to the world of Explorable Explanations (by Nicky Case!) and were immediately enthralled by its educating yet engaging nature. Inspired, we decided to present the concept of quantum mechanics to better engage the youth who have not yet encountered this phenomenon in class.
## What it does
The code that we built is a web page for educational purposes. It talks about the applications and different phenomena of Quantum physics in a very simple way so that even high school students can understand it clearly. The webpage contains simple animations for demonstrations of the phenomena.
## How I built it
We built the webpage using HTML and CSS and created small animations using Javascript and an open-source library (EaselJS and TweenJS from create.js).
## Challenges I ran into
Jennifer: Working through the eye strain! I had some experience with HTML and CSS prior to this hackathon, but having to relearn the basics and take on a completely new language (Javascript) was an incredible challenge. From the numerous crash courses consumed to the constant research and debugging I’ve had to do in the past two days, I realized just how different of a language Javascript is compared to my experience with Python.
Karen: As a beginner programmer, trying to learn coding was a huge challenge. I had limited knowledge with HTML, and no experience with CSS or JavaScript; the latter being the most difficult to understand! This hackathon was also my first concrete experience with programming and also an introduction to software development as a whole. Debugging was also a pain.
Yudi: I have never dealt with a front-end language before. In my past experience, I have only worked with C and Python, and software like MATLAB. Learning three languages in a short amount of time was the biggest challenge I faced. Our team decided on the topic and language the night before the hack, and all three of us binge-watched loads of YouTube crash courses. This was also my very first hackathon experience!
## Accomplishments that I'm proud of
Jennifer: Having actually learned and implemented a new programming language in such a short time span. There were many things on our page that I thought we wouldn’t be able to accomplish. It was an absolute pain trying to create our small animations and there were many times where I felt like giving up. We also played around with user input (and created an unrelated drawing game on the side)! But in the end, I’m proud of all the work we’ve done and for continuing to persevere through our many hurdles. As simple as our web page may be, it was the result of a lot of hard work and it was a great learning experience for all of us!
Karen: In preparation for this hackathon, I attempted to learn three languages over the course of a few days. Software development was always an intimidating field to me, but I am proud to be able to understand the basics of coding - even though I’m only familiar with three languages thus far. As a non-STEM student, coding is something I’m glad to have pursued!
Yudi: I was kind of frustrated that I didn't understand some of the things that were going on, but in the latter half of the hack, I was able to overcome some of the confusion. Although I felt I hadn't contributed that much to the team, I did learn a lot; not just from the crash courses, but also from my teammates. We had many things that we had to clarify and google, but in the end, we gritted our teeth and pulled through.
## What I learned
Jennifer: I hate semicolons and have a new appreciation for Javascript. I also learned that centering divs is unnecessarily painful and have fallen in love with documentation. As well, I learned about my dire need for caffeinated chocolate and miss it dearly. On a more serious note, I learned a lot about web development! Javascript is a more flexible language than I had anticipated and its capabilities astounded me. As I struggled to stay awake, I tried to deploy our page to the web and it somehow worked (thanks Surge).
Karen: Beyond understanding the basics of coding, I also familiarized myself with the work culture surrounding software development. The community is incredibly talented and ambitious, which instills fear within me.
Yudi: I observed that humans work incredibly fast and efficient under pressure :) I really appreciate how the three languages come together so smoothly. But then again, I need to learn more about all these three languages to use them fluently. Through this experience, I realized how little I know about coding, and how vast and versatile this field actually is.
## What's next for A Brief Introduction to Quantum Mechanics
As we work on our Javascript, HTML, and CSS knowledge, we can further develop this website to be more interactive and pretty. In addition, we can add other topics to the web page. | ## Inspiration
Looking around you in your day-to-day life, you see so many people eating so much food. Trust me, this is going somewhere. All that stuff we put in our bodies, what is it? What are all those ingredients that seem more like chemicals that belong in nuclear missiles than in your 3-year-old cousin's Coke? Answering those questions is what we set out to accomplish with this project. But answering a question doesn't mean anything if you don't answer it well, meaning your answer raises as many or more questions than it answers. We wanted everyone, from pre-teens to senior citizens, to be able to understand it. So, in summary of what we wanted to do: we wanted to give all these lazy couch potatoes (us included) an easy, efficient, and, most importantly, comprehensible method of knowing exactly what it is that we're consuming by the metric ton on a daily basis.
## What it does
Our code takes input either in the form of text or an image, and we use it as input for an API from which we extract our final output using specific prompts. Some of our outputs are the nutritional values, a nutritional summary, the amount of exercise required to burn off the calories gained from the meal, (its recipe), and how healthy it is in comparison to other foods.
## How we built it
We built it using Flask and Python for the backend, with HTML and CSS for the frontend.
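The sketch below shows one way such a Flask backend could accept a menu item as text or an image and return the extracted information; the route name and the `analyze_food` helper are hypothetical placeholders for the prompt-based API calls described above.

```python
# Minimal Flask sketch; analyze_food is a hypothetical placeholder for the LLM/API calls.
from flask import Flask, request, jsonify

app = Flask(__name__)

def analyze_food(text=None, image_bytes=None):
    # Placeholder: send the input to the API with specific prompts and collect the
    # nutritional values, summary, exercise equivalent, and health comparison.
    return {"calories": None, "summary": "", "exercise_to_burn": "", "health_comparison": ""}

@app.route("/analyze", methods=["POST"])
def analyze():
    text = request.form.get("food_text")
    image = request.files.get("food_image")
    result = analyze_food(text=text, image_bytes=image.read() if image else None)
    return jsonify(result)

if __name__ == "__main__":
    app.run(debug=True)
```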
## Challenges we ran into
We are all first-timers, so none of us had any idea how the whole thing worked. Individually, we each faced our fair share of struggles with our food, our sleep schedules, and our timidness, which led to miscommunication.
## Accomplishments that we're proud of
Making it through the week and keeping our love of tech intact. Other than that, we really did meet some amazing people and got to know so many cool folks. As a collective group, we really are proud of our teamwork and our ability to compromise, work with each other, and build on each other's ideas. For example, we all started off with different ideas and different goals for the hackathon, but we all managed to find a project we liked and found it in ourselves to bring it to life.
## What we learned
We learned how hackathons work and what they are. We also learned so much more about building projects within a small team, what that is like, and what should be done when the scope of what to build is so wide.
## What's next for NutriScan
* Working ML
* Use of camera as an input to the program
* Better UI
* Responsive
* Release
In online documentaries, we saw visually impaired individuals and their vision consisted of small apertures. We wanted to develop a product that would act as a remedy for this issue.
## What it does
When a button is pressed, a picture is taken of the user's current view. This picture is then analyzed using OCR (Optical Character Recognition) and the text is extracted from the image. The text is then converted to speech for the user to listen to.
## How we built it
We used a push button connected to the GPIO pins on the Qualcomm DragonBoard 410c. The input is taken from the button and initiates a python script that connects to the Azure Computer Vision API. The resulting text is sent to the Azure Speech API.
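A rough sketch of that flow is below; the button, camera, and speech steps are placeholders, and the endpoint shape assumes Azure's v3.2 OCR REST API.

```python
# Button press -> capture -> Azure OCR -> speak (hardware and speech calls are stubbed).
import requests

AZURE_ENDPOINT = "https://<region>.api.cognitive.microsoft.com/vision/v3.2/ocr"
AZURE_KEY = "<subscription-key>"

def wait_for_button_press():
    pass  # placeholder for reading the GPIO push button on the DragonBoard

def capture_image():
    with open("capture.jpg", "rb") as f:  # placeholder for the camera capture
        return f.read()

def speak(text):
    print(text)  # placeholder for the Azure Speech API call

def read_scene():
    wait_for_button_press()
    resp = requests.post(
        AZURE_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": AZURE_KEY,
                 "Content-Type": "application/octet-stream"},
        data=capture_image(),
    )
    resp.raise_for_status()
    words = [w["text"]
             for region in resp.json().get("regions", [])
             for line in region["lines"]
             for w in line["words"]]
    speak(" ".join(words) if words else "No text detected.")

if __name__ == "__main__":
    read_scene()
```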
## Challenges we ran into
Coming up with an idea that we were all interested in, incorporated a good amount of hardware, and met the themes of the makeathon was extremely difficult. We attempted to use Speech Diarization initially but realized the technology is not refined enough for our idea. We then modified our idea and wanted to use a hotkey detection model but had a lot of difficulty configuring it. In the end, we decided to use a pushbutton instead for simplicity in favour of both the user and us, the developers.
## Accomplishments that we're proud of
This is our very first Makeathon and we were proud of accomplishing the challenge of developing a hardware project (using components we were completely unfamiliar with) within 24 hours. We also ended up with a fully functional project.
## What we learned
We learned how to operate and program a DragonBoard, as well as connect various APIs together.
## What's next for Aperture
We want to implement hot-key detection instead of the push button to eliminate the need of tactile input altogether. | losing |
## Inspiration
Imagine a world where the number of mass shootings in the U.S. per year doesn't align with the number of days. With the recent Thousand Oaks shooting, we wanted to make something that would accurately predict the probability of a place having a mass shooting, given a zipcode and a future date.
## What it does
When you type in a zipcode, the corresponding city is queried in the prediction results of our neural network in order to get a probability. This probability is scaled accordingly and represented as a red circle of varying size on our U.S. map. We also made a donation link that takes in credit card information and processes it.
## How we built it
We trained our neural network with datasets on gun violence in various cities. We did a ton of dataset cleaning in order to find just what we needed, and trained our network using scikit-learn. We also used the stdlib api in order to pass data around so that the input zipcode could be sent to the right place, and we also used the Stripe api to handle credit card donation transactions. We used d3.js and other external topological javascript libraries in order to create a map of the U.S. that could be decorated. We then put it all together with some javascript, HTML and CSS.
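The scikit-learn part of that pipeline looks roughly like the sketch below; the features and labels here are synthetic stand-ins for the cleaned gun-violence data.

```python
# Illustrative probability model (synthetic data; the real network trained on cleaned datasets).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 8))                   # stand-in feature matrix per city/date
y = (X[:, 0] + X[:, 3] > 1.2).astype(int)  # stand-in label: incident occurred or not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# The probability for a queried city/date is what sizes the red circle on the map.
prob = model.predict_proba(X_test[:1])[0, 1]
print(f"Predicted probability: {prob:.2f}")
```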
## Challenges we ran into
We had lots of challenges with this project. d3.js was hard to jump right into, as it is such a huge library that correlates data with visualization. Cleaning the data was challenging as well, because people tend not to think twice before throwing data into a big CSV. Sending data between files without the use of a server was challenging, and we managed to bypass that with the stdlib API.
## Accomplishments that we're proud of
A trained neural network that predicts the probability of a mass shooting given a zipcode. A beautiful topological map of the United States in d3. Integration of microservices through APIs we had never used before.
## What we learned
Doing new things is hard, but ultimately worthwhile!
## What's next for Ceasefire
We will be working on a better, real-time mapping of mass shootings data. We will also need to improve our neural network by tidying up our data more. | ## Inspiration
We hear stories about crimes happening around us every day. It's especially terrifying since we're students and we just want to enjoy our college experience. While discussing ideas to hack, we realized this problem **resonated with all of us**. We immediately realized that we needed to work on this.
## What it does
It gives you and your loved ones a chance to take your safety into your own hands. By using our application, you are taking control of the most important thing you have, your life. It generates the safest and shortest path from Location A to Location B.
## How we built it
We analyzed large data sets of crime over the past 10 years and picked out representative points after weighting them based on the relevance and severity of the crime. We then created a data analytics flow (that can be easily repeated with new data) to glean more useful insights from the raw data. To help analyze the data further, we plotted it in virtual reality to look at data spreads and densities. This processed data was then pushed into our MongoDB database hosted on mLab. We then used these generalized clusters to see which areas we should avoid while routing, using the Wrld and OSRM APIs. From here our app generates a single route from all the information, based on what our algorithm devises to be the "safest route" for the user. This route is then plotted and displayed for the user's convenience.
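A simplified sketch of the MongoDB side of this flow is shown below; the collection layout and field names are illustrative, not the project's actual schema.

```python
# Store weighted crime points and look up the risk near a route point (illustrative schema).
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")  # the project hosted this on mLab
crimes = client.safeworld.crimes
crimes.create_index([("location", GEOSPHERE)])

crimes.insert_one({
    "severity": 3,  # weight derived from relevance and severity of the crime
    "location": {"type": "Point", "coordinates": [-79.3832, 43.6532]},  # [lng, lat]
})

def risk_near(lng, lat, radius_m=300):
    """Sum the severity of crimes within radius_m of a candidate route point."""
    nearby = crimes.find({
        "location": {
            "$nearSphere": {
                "$geometry": {"type": "Point", "coordinates": [lng, lat]},
                "$maxDistance": radius_m,
            }
        }
    })
    return sum(doc["severity"] for doc in nearby)

print(risk_near(-79.3832, 43.6532))
```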
## Challenges we ran into
We started out with millions of cells of data, which made it really hard to work with. We had to filter out the most relevant data first and convert the data into a completely new format to optimize it for our path-finding algorithm. In addition, we had to optimize our MongoDB to work well with our data, as we made frequent queries to a database with originally over a million elements. We also constantly had issues mixing up latitude and longitude values, as well as making HTTP requests for many APIs.
## Accomplishments that we're proud of
We are happy that we were able to create a minimum viable product in this short duration. We're especially glad it's not just a "weekend-only" idea and that it is something we can continue to make better. We hope that in the future this idea can truly have a social impact and actually make the world a safer place.
## What we learned
* Wrld/OSRM API
* Node/Express/MongoDB
* HTTP Requests/Callback functions
* Unity VR SDK
* Pandas/Numpy
* Coordinating team efforts in different parts of the Application
## What's next for SafeWorld
Using real-time data to improve predictions for real-life incidents (e.g., protests). In addition, having more global data sets would be an optimal next step to get it working in more cities. This would be expedited by our adaptable framework for data generation and pathing.
Toronto, being one of the busiest cities in the world, is faced with tremendous amounts of traffic, whether it is pedestrians, commuters, bikers, or drivers. With this comes an increased risk of crashes and accidents, making methods of safe travel an ever-pressing issue, not only for those in Toronto, but for all people living in metropolitan areas. This led us to explore possible solutions to such a problem, as we believe that all accidents should be tackled proactively, emphasizing prevention rather than attempting to better deal with the after-effects. Hence, we devised an innovative solution for this problem, which at its core uses machine learning to predict which routes/streets are likely to be dangerous and advises you on which route to take wherever you want to go.
## What it does
Leveraging AI technology, RouteSafer provides safer alternatives to Google Map routes and aims to reduce automotive collisions in cities. Using machine learning algorithms such as k-nearest neighbours, RouteSafer analyzes over 20 years of collision data and uses over 11 parameters to make an intelligent estimate about the safety of a route, and ensure the user arrives safe.
## How I built it
The path to implement RouteSafer starts with developing rough architecture that shows different modules of the project being independently built and at the same time being able to interact with each other in an efficient way. We divided the project into 3 different segments of UI, Backend and AI handled by Sherley, Shane & Hanz and Tanvir respectively.
The product leverages extensive API usage for different and diverse purposes including Google map API, AWS API and Kaggle API. Technologies involve React.js for front end, Flask for web services and Python for Machine Learning along with AWS to deploy it on the cloud.
The dataset, ‘KSI’, was downloaded from Kaggle and has records from 2014 to 2018 on major accidents that took place in the city of Toronto. The dataset required a good amount of preprocessing because of its inconsistency; the techniques involved one-hot encoding, dimensionality reduction, filling null or None values, and feature engineering. This made sure that the data was consistent for all future steps.
Machine learning gives the project a smart way to solve our problem: K-Means clustering lets us extract a risk level for driving on a particular street. The Google Maps API retrieves different candidate routes, and the model assigns each one a risk score, hence making your travel route even safer.
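A simplified sketch of the clustering-based risk scoring is shown below; the coordinates are synthetic, and the cluster count and scoring are illustrative rather than the trained model.

```python
# Cluster collision points, then score a candidate route by the clusters it passes through.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
collisions = rng.normal(loc=[43.70, -79.40], scale=0.05, size=(1000, 2))  # lat, lng

kmeans = KMeans(n_clusters=20, n_init=10, random_state=1).fit(collisions)
cluster_risk = np.bincount(kmeans.labels_, minlength=20)  # denser clusters = riskier streets

def route_risk(route_points):
    labels = kmeans.predict(np.asarray(route_points))
    return int(cluster_risk[labels].sum())

candidate_route = [[43.705, -79.395], [43.712, -79.402]]  # points decoded from a Maps route
print(route_risk(candidate_route))
```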
## Challenges I ran into
One of the first challenges that we ran into as a team was learning how to properly integrate the Google Maps API polyline, and accurately converting the compressed string into numerical values expressing longitudes and latitudes. We finally solved this first challenge through lots of research, and even more stackoverflow searches :)
Furthermore, another challenge we ran into was the implementation/construction of our machine learning based REST API, as there were many different parts/models that we had to "glue" together, whether it was through http POST and GET requests, or some other form of communication protocol.
We faced many challenges throughout these two days, but we were able to push through thanks to the help of the mentors and lots of caffeine!
## Accomplishments that I'm proud of
The thing that we were most proud of was the fact that we met all of our initial expectations, and beyond, with regard to the product build. At the end of the two days we were left with a deployable product that had gone through end-to-end testing and was ready for production. Given the limited time for development, we were very pleased with our performance and the resulting project we built. We were especially proud when we tested the service and found that the results matched our intuition.
## What I learned
Working on RouteSafer has helped each one of us gain soft skills and technical skills. Some of us had no prior experience with technologies in our stack, and working together helped us share knowledge, like the use of React.js and machine learning. The guidance provided through HackThe6ix gave us all insights into the big and great world of cloud computing, with two of the world's largest cloud computing services onsite at the hackathon. Apart from technical skills, leveraging teamwork and communication was something we all benefitted from, and something we will definitely need in the future.
## What's next for RouteSafer
Moving forward we see RouteSafer expanding to other large cities like New York, Boston, and Vancouver. Car accidents are a pressing issue in all metropolitan areas, and we want RouteSafer there to prevent them. If one day RouteSafer could be fully integrated into Google Maps, and could be provided on any global route, our goal would be achieved.
In addition, we aim to expand our coverage by using Google Places data alongside collision data collected by various police forces. Google Places data will further enhance our model and allow us to better serve our customers.
Finally, we see RouteSafer partnering with a number of large insurance companies that would like to use the service to better protect their customers, provide lower premiums, and cut costs on claims. Partnering with a large insurance company would also give RouteSafer the ability to train and vastly improve its model.
To summarize, we want RouteSafer to grow and keep drivers safe across the Globe! | partial |
## Inspiration
Over the course of the past year, one of the most heavily impacted industries due to the COVID-19 pandemic has been the service sector. Specifically, COVID-19 has transformed the financial viability of restaurant models. Moving forward, it is projected that 36,000 small restaurants will not survive the winter; successful restaurants have thus far relied on online dining services such as Grubhub or Doordash. However, these methods come at the cost of flat premiums on every sale, driving up the food price and cutting at least 20% from a given restaurant's revenue. Within these platforms, the most popular, established restaurants are prioritized due to built-in search algorithms. As such, not all small restaurants can join these otherwise expensive options, and there is no meaningful way for small restaurants to survive during COVID.
## What it does
Potluck provides a platform for chefs to conveniently advertise their services to customers who will likewise be able to easily find nearby places to get their favorite foods. Chefs are able to upload information about their restaurant, such as their menus and locations, which is stored in Potluck’s encrypted database. Customers are presented with a personalized dashboard containing a list of ten nearby restaurants which are generated using an algorithm that factors in the customer’s preferences and sentiment analysis of previous customers. There is also a search function which will allow customers to find additional restaurants that they may enjoy.
## How I built it
We built a web app with Flask where users can feed in data for a specific location, cuisine of food, and restaurant-related tags. Based on this input, restaurants in our database are filtered and ranked based on the distance to the given user location, calculated using the Google Maps API, and a sentiment score based on any comments on the restaurant, calculated using the Natural Language Toolkit (NLTK) and Google Cloud NLP. Within the page, consumers can provide comments on their dining experience with a certain restaurant, and chefs can add information for their restaurant, including cuisine, menu items, location, and contact information. Data is stored in a PostgreSQL-based database on Google Cloud.
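The ranking idea can be sketched as below; the weights and the pre-computed distance are illustrative assumptions, with NLTK's VADER standing in for the sentiment step.

```python
# Toy ranking: closer restaurants with more positive comments score higher (weights illustrative).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def rank_restaurant(distance_km, comments, max_km=10.0):
    sentiment = (
        sum(sia.polarity_scores(c)["compound"] for c in comments) / len(comments)
        if comments else 0.0
    )  # compound score ranges from -1 (negative) to +1 (positive)
    proximity = max(0.0, 1.0 - distance_km / max_km)  # 1 when next door, 0 when far away
    return 0.6 * proximity + 0.4 * sentiment

comments = ["Great takeout and very friendly owners!", "Soup arrived cold."]
print(round(rank_restaurant(distance_km=2.5, comments=comments), 3))
```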
## Challenges I ran into
One of the challenges that we faced was coming up with a solution that matched the timeframe and bandwidth of our team. We did not want to be too ambitious with our ideas and technology, yet we wanted to provide a product that we felt was novel and meaningful.
We also found it difficult to integrate the backend with the frontend. For example, we needed the results from the Natural Language Toolkit (NLTK) in the backend to be used by the Google Maps JavaScript API in the frontend. By utilizing Jinja templates, we were able to serve the webpage and modify its script code based on the backend results from NLTK.
## Accomplishments that I'm proud of
We were able to identify a problem that was not only very meaningful to us and our community, but also one that we had a reasonable chance of approaching with our experience and tools. Not only did we get our functions and app to work very smoothly, we ended up with time to create a very pleasant user-experience and UI. We believe that how comfortable the user is when using the app is equally as important as how sophisticated the technology is.
Additionally, we were happy that we were able to tie in our product into many meaningful ideas on community and small businesses, which we believe are very important in the current times.
## What I learned
Tools we tried for the first time: Flask (with the additional challenge of running HTTPS), Jinja templates for dynamic HTML code, Google Cloud products (including Google Maps JS API), and PostgreSQL.
For many of us, this was our first experience with a group technical project, and it was very instructive to find ways to best communicate and collaborate, especially in this virtual setting. We benefited from each other’s experiences and were able to learn when to use certain ML algorithms or how to make a dynamic frontend.
## What's next for Potluck
We want to incorporate an account system to make user-specific recommendations (Firebase). Additionally, regarding our Google Maps interface, we would like to have dynamic location identification. Furthermore, the capacity of our platform could help us expand the program to pair people with any type of service, not just food. We believe that the flexibility of our app could be used for other ideas as well.
A couple of weeks ago, 3 of us met up at a new Italian restaurant and we started going over the menu. It became very clear to us that there were a lot of options, but also a lot of them didn't match our dietary requirements. And so, we though of Easy Eats, a solution that analyzes the menu for you, to show you what options are available to you without the dissapointment.
## What it does
You first start by signing up to our service through the web app, set your preferences and link your phone number. Then, any time you're out (or even if you're deciding on a place to go) just pull up the Easy Eats contact and send a picture of the menu via text - No internet required!
Easy Eats then does the hard work of going through the menu and comparing the items with your preferences, and highlights options that it thinks you would like, dislike and love!
It then returns the menu to you, and saves you time when deciding your next meal.
Even if you don't have any dietary restrictions, by sharing your preferences Easy Eats will learn what foods you like and suggest better meals and restaurants.
## How we built it
The heart of Easy Eats lies on the Google Cloud Platform (GCP), and the soul is offered by Twilio.
The user interacts with Twilio's APIs by sending and receiving messages; Twilio also initiates some of the API calls that are directed to GCP through Twilio's serverless functions. The user can also interact with Easy Eats through Twilio's chat function or REST APIs that connect to the front end.
In the background, Easy Eats uses Firestore to store user information, and Cloud Storage buckets to store all images and links sent to the platform. From there, the images/PDFs are parsed using either the OCR engine or the Vision AI API (OCR works better with PDFs, whereas Vision AI is more accurate when used on images). Then, the data is passed through the NLP engine (customized for food) to find synonyms for popular dietary restrictions (such as Pork byproducts: Salami, Ham, ...).
Finally, App Engine glues everything together by hosting the frontend and the backend on its servers.
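For the image path, the Vision AI text-extraction step looks roughly like this sketch; the file name is a placeholder, and the dietary-restriction matching runs on the returned text afterwards.

```python
# Minimal sketch of menu text extraction with the Google Cloud Vision API.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def extract_menu_text(image_bytes):
    image = vision.Image(content=image_bytes)
    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    return response.full_text_annotation.text

with open("menu.jpg", "rb") as f:  # placeholder for a file pulled from the Cloud Storage bucket
    print(extract_menu_text(f.read()))
```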
## Challenges we ran into
This was the first hackathon for a couple of us, but also the first time for any of us to use Twilio. That proved a little hard to work with as we misunderstood the difference between Twilio Serverless Functions and the Twilio SDK for use on an express server. We ended up getting lost in the wrong documentation, scratching our heads for hours until we were able to fix the API calls.
Further, with so many moving parts a few of the integrations were very difficult to work with, especially when having to re-download + reupload files, taking valuable time from the end user.
## Accomplishments that we're proud of
Overall, we built a solid system that connects Twilio, GCP, a backend, a frontend, and a database, and provides a seamless experience. There is no dependency on the user either: they just send a text message from any device and the system does the work.
It's also special to us as we personally found it hard to find good restaurants that match our dietary restrictions, it also made us realize just how many foods have different names that one would normally google.
## What's next for Easy Eats
We plan on continuing development by suggesting local restaurants that are well suited for the end user. This would also allow us to monetize the platform by giving paid-priority to some restaurants.
There's also a lot to be improved in terms of code efficiency (I think we have O(n^4) in one of the functions, ahah...) to make this a smoother experience.
Easy Eats will change restaurant dining as we know it. Easy Eats will expand its services and continue to make life easier for people, looking to provide local suggestions based on your preference. | ## Inspiration
STEM was inspired by our group members, who all experienced the failure of a personal health goal. We believed that setting similar goals with our friends and seeing their progress would inspire us to work harder towards completing our own goals. We also agreed that this may encourage us to start challenges that we see our friends partaking in, which can help us develop healthy lifestyle habits.
## What it does
STEM provides a space where users can set their health goals in the form of challenges and visualize their progress in the form of a growing tree. Additionally, users can see others' progress within the same challenges to further motivate them. Users can help promote fitness and health by creating their own challenges and inviting their friends, family, and colleagues.
## How we built it
This mobile application was built with react-native, expo CLI and firebase.
## Challenges we ran into
One challenge that we ran into was the time limit. There were a few parts of our project that we designed on Figma and intended to code, but we were unable to do so. Furthermore, each of our group members had no prior experience with React Native, which, in combination with the time limit, led to some planned features being left undeveloped. Another challenge we faced was the fact that our project is a very simple idea with a lot of competition.
## Accomplishments that we're proud of
We are very proud of our UI and the aesthetics of our project. Each of our group members had no prior experience with React Native, and therefore we are proud that we were able to build and submit a functional project within 36 hours. Lastly, we are also very proud that we were able to develop an idea with the potential to be a future business.
## What we learned
Throughout this weekend, we learned how to be more consistent with version control in order to work better and faster as a team. We also learned how to build an effective NoSQL database schema.
## What's next for STEM
As we all believe that STEM has the potential to be a future business, we will continue developing the code and deploy it. We will be adding a live feed page that will allow you to see, like, and comment on friends' posts. Users will be able to post about their progress in challenges. STEM will also reach out and try to partner with companies to create incentives for certain achievements made by users (e.g., getting a discount on certain sportswear brands after completing a physical challenge or reaching a certain tree level).
## Inspiration
GeoGuesser is a fun game which went viral in the middle of the pandemic, but after having played for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing a playlist of iconic locations in addendum to exciting trivia like movies and monuments for that extra hit of dopamine when you get the right answers!
## What it does
The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After selecting a playlist, five random locations are chosen from a list of curated locations. You are then provided with a picture from which you have to guess the location and the bit of trivia associated with it, like the name of the movie from which we selected the location. You get points for how close you are to the location and for whether you got the bit of trivia correct or not.
## How we built it
We used the *discord.py* library for actually coding the bot and interfacing it with discord. We stored our playlist data in external *excel* sheets which we parsed through as required. We utilized the *google-streetview* and *googlemaps* python libraries for accessing the google maps streetview APIs.
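A stripped-down skeleton of the bot loop is shown below; the command name, token variable, and playlist contents are illustrative, with the Excel parsing and Street View image fetch stubbed out.

```python
# Skeleton of the Discord bot (playlists and image fetching are stubbed; names illustrative).
import os
import random
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

PLAYLISTS = {"capitals": [], "horror": [], "landmarks": []}  # loaded from the Excel sheets

@bot.command(name="play")
async def play(ctx, playlist: str):
    locations = PLAYLISTS.get(playlist.lower())
    if not locations:
        await ctx.send("Pick one of: " + ", ".join(PLAYLISTS))
        return
    for loc in random.sample(locations, k=min(5, len(locations))):
        # a Street View image for `loc` would be fetched and attached here
        await ctx.send(f"Round start! Guess the location for: {loc}")

bot.run(os.environ["DISCORD_TOKEN"])
```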
## Challenges we ran into
For initially storing the data, we thought to use a playlist class while storing the playlist data as an array of playlist objects, but instead used excel for easier storage and updating. We also had some problems with the Google Maps Static Streetview API in the beginning, but they were mostly syntax and understanding issues which were overcome soon.
## Accomplishments that we're proud of
Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system based on the Haversine Formula for Distances on Spheres was also an accomplishment we're proud of.
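The distance part of that scoring can be written directly from the haversine formula; the point values in this sketch are illustrative, not the game's real scale.

```python
# Haversine great-circle distance and a simple distance-based score (scale illustrative).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def distance_points(guess, answer, max_points=5000):
    return max(0, round(max_points - haversine_km(*guess, *answer)))

print(distance_points((45.4215, -75.6972), (43.6532, -79.3832)))  # Ottawa guess, Toronto answer
```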
## What we learned
We learned better syntax and practices for writing Python code. We learnt how to use the Google Cloud Platform and Streetview API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about Human Computer Interaction as designing an interface for gameplay was rather interesting on Discord.
## What's next for Geodude?
Possibly adding more topics, and refining the loading of streetview images to better reflect the actual location. | ## Inspiration
As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them!
## What it does
Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they would get a different picture when 50+ individuals want them to FixIt now!
## How we built it
We started out by brainstorming use cases for our app and discussing the populations we want to target with it. Next, we discussed the main features of the app that we needed to ensure full functionality to serve these populations. We collectively decided to use Android Studio to build an Android app and to use the Google Maps API to have an interactive map display.
## Challenges we ran into
Our team had little to no exposure to the Android SDK before, so we experienced a steep learning curve while developing a functional prototype in 36 hours. The Google Maps API took a lot of patience to get working, as did figuring out certain UI elements. We are very happy with our end result and all the skills we learned in 36 hours!
## Accomplishments that we're proud of
We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only learned more about app design and technical expertise with the Google Maps API, but we also explored our roles as engineers that are also citizens. Empathizing with our user group showed us a clear way to lay out the key features of the app that we wanted to build and helped us create an efficient design and clear display.
## What we learned
As we mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and also what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing!
## What's next for FixIt
An Issue’s Perspective
* Progress bar, fancier rating system
* Crowdfunding
A Finder’s Perspective
* Filter Issues, badges/incentive system
A Fixer’s Perspective
* Filter Issues based on scores, Trending Issues
As Engineering students, we tend to spend a lot of our time outside classes in the library working away on assignments. In the days following, we might even brag about how much time we spent working, and how much we were able to get done. However, up until now, there wasn't any concrete way to show this and compare to one another in this sudo competition.
## What it does
We created a web app that automatically tracks how long you spend studying in the library so that you can compete with friends and even make it to the top of the Study Spot leaderboard!
## How we built it
Your location is automatically detected using the Google Nearby Places API which biases its results to detect if a library is within a set radius from you. Then once your timer has started, it tracks your score and how long you've been studying for. Once you stop studying (by clicking the "stop studying" button or simply leaving the location), the score is then sent to the back-end database. These scores are compared and ultimately culminated into a leaderboard screen.
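The nearby-library check can be sketched with a single Nearby Search request; the API key and coordinates below are placeholders.

```python
# Sketch of the library detection step using the Places Nearby Search endpoint.
import requests

PLACES_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def find_nearby_library(lat, lng, api_key, radius_m=150):
    resp = requests.get(PLACES_URL, params={
        "location": f"{lat},{lng}",
        "radius": radius_m,
        "type": "library",
        "key": api_key,
    })
    resp.raise_for_status()
    results = resp.json().get("results", [])
    return results[0]["name"] if results else None  # None means the study timer should not start

# find_nearby_library(43.4723, -80.5449, "<API_KEY>")
```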
## Challenges we ran into
A lot of the team did not have previous experience with APIs, so naturally we ran into a couple of issues when working with the Verbwire API. We were, however, able to find a workaround to ensure that our code worked the way we hoped!
## What we learned
Colin - Learning the basics of backend development and API calls
Jinwoo - Getting familiar with React.js
Jingyue - Front-end design (tailwind.css), React.js
Yax - How to use Firebase and Next.js authentication
## What's next for Study Spot
We have a couple of ideas for the development of Study Spot. The first is to mint unique NFTs for each individual user instead of using and displaying the NFT mints individually. Another idea we hope to implement is a "search" feature, which can help the user navigate to the nearest library; this is particularly useful when working in a new location or area you might not know very well.
## Triangularizer
Takes your simple mp4 videos and makes them hella rad! | ## Inspiration
Growing up, our team members struggled with gaming addictions and with how games affected our minds. Solutions that could help detect, document, and provide solutions for how games affected our mental health were poorly designed and few and far between. We created this app as an aid for our own struggles, and with the hope that it could help others who share the same struggles as we did. Building on top of that, we also wanted to increase our performance in competitive games and taste more of that sweet, sweet victory: managing your temper and mood is a large contributor to competitive game performance.
## What it does
Shoebill is a web application that tracks, detects, documents, and provides solutions to gaming addiction. Its analytics are able to track your mental stability, mood, and emotions while you game. It can help you discover trends in your temperament and learn what kind of gamer you are. Using Shoebill, we can learn what tilts you the most, and optimize and grow as a competitive gamer.
## How is it built
React.js is the main framework on which we built our web application, supported by a Flask Python backend. We integrated Hume's API to detect emotions related to game addiction and rage behaviour. We also tried integrating the Zepp watch to get users' health data for analysis.
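A minimal sketch of the backend route that the React client could post webcam frames to is below; `score_emotions` is a placeholder stub and is not Hume's real interface.

```python
# Flask route receiving frames from the frontend; the emotion scoring call is stubbed out.
from flask import Flask, request, jsonify

app = Flask(__name__)

def score_emotions(frame_bytes):
    # Placeholder: forward the frame to the emotion-detection service and return
    # scores for the emotions tied to tilt/rage behaviour.
    return {"anger": 0.0, "distress": 0.0, "calmness": 0.0}

@app.route("/api/emotions", methods=["POST"])
def emotions():
    frame = request.files["frame"].read()
    return jsonify(score_emotions(frame))

if __name__ == "__main__":
    app.run(debug=True)
```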
## Challenges we ran into
Navigating the React.js framework was a challenge for us, as most of our team was unfamiliar with it. Integrating Flask with React was also difficult, as wiring the two frameworks together works quite differently from using each one on its own.
Moreover, we initially experimented with pure HTML and CSS for the frontend, but then realized that React JS files would make the web app more dynamic and easier to integrate, so we had to switch halfway. We also had to pivot to Vite, since we ran into root bugs with the deprecated Create React App framework we initially used.
Also, navigating through the APIs was more difficult than usual due to unclear documentation. Notably, since there were so many hackers in the same venue, the provided WiFi speed was exceptionally slow, which decelerated our progress.
## Accomplishments that we're proud of
Despite the challenges, our APIs have seamlessly integrated and in the last few hours we were able to piece together the backend and frontend - that were once only working separately - and make it a functioning web app. Our logo is also sleek and minimalistic, reflecting the professional nature of Shoebill. Equally important, we've formed stronger bonds with our teammates through collaboration and support, reaching for success together.
## What we learned
We were able to learn how React.js and Flask worked together, and understand the fundamental functionalities of Git. We also learned the importance of optimizing our ideation phase.
We learned that frequent contributions to GitHub are vital, in that they are a fundamental aspect of project management and version control. Furthermore, we now understand the significance of collaboration among team members, especially constant and effective communication. Lastly, we gained a deeper understanding of API integration.
## What's next for Shoebill
Moving forward, we wish to integrate 3D graph visualizations with dynamic elements. We want to evolve from the hackathon project into a fully-fledged product, involving: the incorporation of user profiles, an integrated Discord bot, and extracting more health data from the Zepp watch (such as blood oxygen levels). | ## Inspiration
One of our team members, Aditya, has been in physical therapy (PT) for the last year after a wrist injury on the tennis court. He describes his experience with PT as expensive and inconvenient. Every session meant a long drive across town, followed by an hour of therapy and then the journey back home. On days he was sick or traveling, he would have to miss his PT sessions.
Another team member, Adarsh, saw his mom rushed to the hospital after suffering from a third degree heart block. In the aftermath of her surgery, in which she was fitted with a pacemaker, he noticed how her vital signs monitors, which were supposed to aid in her recovery, inhibited her movement and impacted her mental health.
These insights together provided us with the inspiration to create TherapEase.ai. TherapEase.ai uses AI-enabled telehealth to bring **affordable and effective PT** and **contactless vital signs monitoring services** to consumers, especially among the **elderly and disabled communities**. With virtual sessions, individuals can receive effective medical care from home with the power of pose correction technology and built-in heart rate, respiratory, and Sp02 monitoring. This evolution of telehealth flips the traditional narrative of physical development—the trainee can be in more control of their body positioning, granting them greater levels of autonomy.
## What it does
The application consists of the following features:
Pose Detection and Similarity Tracking
Contactless Vital Signs Monitoring
Live Video Feed with Trainer
Live Assistant Trainer Chatbot
Once a PT Trainer or Medical Assistant creates a specific training room, the user is free to join said room. Immediately, the user’s body positioning will be highlighted and compared to that of the trainer. This way the user can directly mimic the actions of the trainer and use visual stimuli to better correct their position. Once the trainer and the trainee are aligned, the body position highlights will turn blue, indicating the correct orientation has been achieved.
The application also includes a live assistant trainer chatbot to provide useful tips for the user, especially when the user would like to exercise without the presence of the trainer.
Finally, on the side of the video call, the user can monitor their major vital signs: heart rate, respiratory rate, and blood oxygen levels without the need for any physical sensors or wearable devices. All three are estimated using remote Photoplethysmography: a technique in which fluctuations in camera color levels are used to predict physiological markers.
## How we built it
We began first with heart rate detection. The remote Photoplethysmography (rPPG) technique at a high level works by analyzing the amount of green light that gets absorbed by the face of the trainee. This serves as a useful proxy as when the heart is expanded, there is less red blood in the face, which means there is less green light absorption. The opposite is true when the heart is contracted. By magnifying these fluctuations using Eulerian Video Magnification, we can then isolate the heart rate by applying a Fast Fourier Transform on the green signal.
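Stripped of the video magnification and face tracking, the core of that estimate reduces to a peak search in the frequency domain, as in this sketch with a synthetic green-channel signal.

```python
# Simplified rPPG heart-rate estimate: FFT of the mean green value per frame (synthetic signal).
import numpy as np

fps = 30.0
t = np.arange(0, 20, 1 / fps)  # 20 seconds of frames
green_means = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.005 * np.random.randn(t.size)

signal = green_means - green_means.mean()          # remove the DC component
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)

band = (freqs > 0.7) & (freqs < 4.0)               # roughly 42-240 bpm
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {bpm:.0f} bpm")      # ~72 bpm for the 1.2 Hz test signal
```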
Once the heart rate detection software was developed, we integrated PoseNet's pose estimation algorithm, which draws 17 key points on the trainee in the video feed. This led to the development of two-way video communication using webRTC, which simulates the interaction between the trainer and the trainee. With the trainer's and the trainee's poses both being estimated, we built the weighted-distance similarity comparison function of our application, which shows clearly when the user matches the position of the trainer.
At this stage, we then incorporated the final details of the application: the LLM assistant trainer and the additional vital signs detection algorithms. We integrated **Intel’s Prediction Guard** into our chatbot to increase the speed and robustness of the LLM. For respiratory rate and blood oxygen levels, we integrated algorithms that build off of rPPG technology to determine these two metrics.
## Challenges we ran into (and solved!)
We are particularly proud of being able to implement the two-way video communication that underlies the interaction between a patient and specialist on TherapEase.ai. There were many challenges associated with establishing this communication. We spent many hours building an understanding of webRTC, web sockets, and HTTP protocol. Our biggest ally in this process was the developer tools of Chrome, which we could use to analyze network traffic and ensure the right information is being sent.
We are also proud of the cosine similarity algorithm which we use to compare the body pose of a specialist/trainer with that of a patient. A big challenge associated with this was finding a way to prioritize certain points (from PoseNet) over others (e.g. an elbow joint should be given more importance than an eye point in determining how far apart two poses are). After hours of mathematical and programming iteration, we devised an algorithm that was able to weight certain joints more than others, leading to much more accurate results when comparing poses on the two-way video stream. Another challenge was finding a way to efficiently compute and compare two pose vectors in real time (since we are dealing with a live video stream). Rather than having a data store, for this hackathon we compute our cosine similarity in the browser.
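The weighting idea can be captured in a few lines; the keypoint ordering and weight values below are illustrative, not the tuned values used in the app.

```python
# Weighted cosine similarity between two 17-keypoint poses (weights illustrative).
import numpy as np

def weighted_cosine_similarity(pose_a, pose_b, weights):
    """pose_a, pose_b: (17, 2) keypoint arrays; weights: (17,) importance per joint."""
    w = np.sqrt(np.repeat(weights, 2))  # sqrt so each weight enters the dot product linearly
    a, b = pose_a.ravel() * w, pose_b.ravel() * w
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

weights = np.ones(17)
weights[5:11] = 3.0  # e.g., weight shoulders/elbows/wrists more than facial points
trainer = np.random.rand(17, 2)
trainee = trainer + 0.02 * np.random.randn(17, 2)
print(round(weighted_cosine_similarity(trainer, trainee, weights), 3))
```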
## What's next for TherapEase.ai
We all are very excited about the development of this application. In terms of future technical developments, we believe that the following next steps would take our application to the next level.
* Peak Enhancement for Respiratory Rate and SpO2
* Blood Pressure Contactless Detection
* Multi-channel video Calling
* Increasing Security | losing |
## Inspiration
The COVID-19 pandemic has changed the way we go about everyday errands and trips. Along with needing to plan around wait times, distance, and reviews for a location we may want to visit, we now also need to consider how many other people will be there and whether it's even a safe establishment to visit. *Planwise helps us plan our trips better.*
## What it does
Planwise searches for the places around you that you want to visit and calculates a PlanScore that weighs the Google **reviews**, current **attendance** vs usual attendance, **visits**, and **wait times**, so that locations that are rated highly, have fewer people currently visiting them than usual, and have low waiting times receive a high score. A location's PlanScore **changes by the hour** to give users the most up-to-date information about whether they should visit an establishment. Furthermore, Planwise also **flags** common types of places that are prone to promoting the spread of COVID-19, but still allows users to search for them in case they need to visit them for **essential work**.
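A toy version of how those factors might combine into an hourly score is sketched below; the weights are illustrative, not Planwise's actual ones.

```python
# Toy PlanScore: high ratings, unusually low crowds, and short waits score well.
def plan_score(rating, current_attendance, usual_attendance, wait_minutes):
    rating_part = rating / 5.0                                    # Google reviews, 0..1
    crowd_part = 1.0 - min(current_attendance / max(usual_attendance, 1), 1.0)
    wait_part = 1.0 - min(wait_minutes / 60.0, 1.0)
    return round(100 * (0.4 * rating_part + 0.4 * crowd_part + 0.2 * wait_part))

# A well-rated grocery store that is unusually empty with a short line:
print(plan_score(rating=4.6, current_attendance=12, usual_attendance=40, wait_minutes=5))
```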
## How we built it
We built Planwise as a web app with Python, Flask, and HTML/CSS. We used the Google Places and Populartimes APIs to get and rank places.
## Challenges we ran into
The hardest challenges weren't technical - they had more to do with our *algorithm* and considering the factors of the pandemic. Should we penalize an essential grocery store for being busy? Should we even display results for gyms in counties which have enforced shutdowns on them? Calculating the PlanScore was tough because a lot of places didn't have some of the information needed. We also spent some time considering which factors to weigh more heavily in the score.
## Accomplishments that we are proud of
We're proud of being able to make an application that has actual use in our daily lives. Planwise makes our lives not just easier but **safer**.
## What we learned
We learned a lot about location data and what features are relevant when ranking search results.
## What's next for Planwise
We plan to further develop the web application and start a mobile version soon! We would like to further **localize** advisory flags on search results depending on the county. For example, if a county has strict lockdown, then Planwise should flag more types of places than the average county. | ## Inspiration
We got lost so many times inside MIT... And no one could help us :( No Google Maps, no Apple Maps, NO ONE. Since then, we have dreamed about the idea of a more precise navigation platform that works inside buildings. And here it is. But that's not all: as traffic GPS apps usually do, we also want to avoid the big crowds that sometimes stand in corridors.
## What it does
Using just the pdf of the floor plans, it builds a digital map and creates the data structures needed to find the shortest path between two points, considering walls, stairs and even elevators. Moreover, using fictional crowd data, it avoids big crowds so that it is safer and faster to walk inside buildings.
## How we built it
Using k-means, we created nodes and clustered them using the elbow (diminishing returns) optimization. We obtained the hallway centers by combining scikit-learn tools and filtering the results with k-means. Finally, we created the edges between nodes, simulated crowd hotspots, and calculated the shortest path accordingly. Each wifi hotspot takes into account the number of devices connected to the internet to estimate the number of nearby people. This information allows us to weight some paths and penalize those with large nearby crowds.
A path can be searched on a website powered by Flask, where the corresponding result is shown.
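The crowd-penalized routing reduces to a weighted shortest-path search over the hallway graph, as in this tiny illustration with made-up numbers.

```python
# Hallway nodes form a graph; edge costs are inflated by nearby crowd estimates (numbers made up).
import networkx as nx

G = nx.Graph()
edges = [("A", "B", 10), ("B", "C", 10), ("A", "D", 12), ("D", "C", 12)]  # distances in metres
crowd_penalty = {("B", "C"): 25}  # many devices connected to the hotspot near this corridor

for u, v, dist in edges:
    penalty = crowd_penalty.get((u, v), crowd_penalty.get((v, u), 0))
    G.add_edge(u, v, weight=dist + penalty)

print(nx.shortest_path(G, "A", "C", weight="weight"))  # detours through D to avoid the crowd
```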
## Challenges we ran into
At first, we didn't know which was the best approach to convert a pdf map to useful data.
The maps we worked with are taken from the MIT intranet and we are not allowed to share them, so our web app cannot be published as it uses those maps...
Furthermore, we had limited experience with Machine Learning and Computer Vision algorithms.
## Accomplishments that we're proud of
We're proud of having developed a useful application that can be employed by many people and can be extended automatically to any building thanks to our map recognition algorithms. Also, using real data from sensors (wifi hotspots or any other similar devices) to detect crowds and penalize nearby paths.
## What we learned
We learned more about Python, Flask, Computer Vision algorithms and Machine Learning. Also about friendship :)
## What's next for SmartPaths
The next steps would be honing the Machine Learning part and using real data from sensors. | ## Inspiration
Have you ever used a budgeting app? Do you set budgets for each month?
I use a budgeting app and when I check my weekly spending, I always realize I spent a $100 too much!
So, we decided to make something proactive rather than reactive that will help you immensely in your spending decisions.
## What it does
The app will alert you in real-time when you enter places like a coffee shop, grocery store, restaurant, or bar, and tell you how much budget you have left for that particular place type. It will tell you something like "You have $40 left in your Grocery Budget" or "You have $11 left in your Coffee Budget". This way you can make better spending decisions.
## How we built it
We used the Android platform and the Google Places API to build the app.
## Challenges we ran into
It was hard to get the Google Places API working automatically without the app running in the background. Our plan was to send real-time alerts to a wearable device, but we did not have access to any, so we decided to make the smartphone app instead.
## Accomplishments that we're proud of
Implementing the Google Places API and querying data from Firebase in real-time using the in-built GPS, successfully and without the use of Wi-Fi or Bluetooth.
## What we learned
We learned how to query data from Firebase and how we can easily identify nearby places using the Google Places API.
## What's next for Budget Easy
We want to parter with a bank or budgeting app which tracks user's spending and integrate this proactive feature into their app! | winning |
## Inspiration
With caffeine being a staple in almost every student’s lifestyle, many are unaware of the amount of caffeine in their drinks. Although a small dose of caffeine increases one’s ability to concentrate, higher doses may be detrimental to physical and mental health. This inspired us to create The Perfect Blend, a platform that allows users to manage their daily caffeine intake, with the aim of preventing students from spiralling into a coffee addiction.
## What it does
The Perfect Blend tracks caffeine intake and calculates how long it takes to leave the body, ensuring that users do not consume more than the daily recommended amount of caffeine. Users can add drinks from the given options and it will update on the tracker. Moreover, The Perfect Blend educates users on how the quantity of caffeine affects their bodies with verified data and informative tier lists.
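As a rough illustration of the tracking math (the real tracker is written in JavaScript on Velo; the ~5-hour half-life and 400 mg daily guideline below are commonly cited figures, used here as assumptions):

```python
# Illustration only (the real tracker is JavaScript on Velo): exponential decay
# with an assumed ~5-hour half-life and a 400 mg daily guideline.
import math

HALF_LIFE_HOURS = 5.0
DAILY_LIMIT_MG = 400

def caffeine_remaining(dose_mg, hours_since_drink):
    return dose_mg * 0.5 ** (hours_since_drink / HALF_LIFE_HOURS)

def hours_until(dose_mg, target_mg=10):
    """Rough time for a single dose to fall below a small residual amount."""
    return HALF_LIFE_HOURS * math.log2(dose_mg / target_mg)

drinks = [(150, 6), (80, 2)]                # (mg, hours ago): a coffee and a tea
in_body = sum(caffeine_remaining(mg, h) for mg, h in drinks)
print(f"{in_body:.0f} mg still in your system; the daily guideline is {DAILY_LIMIT_MG} mg")
```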
## How we built it
We used Figma to lay out the design of our website, then implemented it into Velo by Wix. The back-end of the website is coded using JavaScript. Our domain name was registered with domain.com.
## Challenges we ran into
This was our team's first hackathon, so we decided to use Velo by Wix to speed up the website-building process; however, Wix only allows one person to edit at a time, which significantly decreased the efficiency of developing the website. In addition, Wix relies on building blocks and set templates, which makes customization more difficult. Our team had no previous experience with JavaScript, which made the process more challenging.
## Accomplishments that we're proud of
This hackathon allowed us to ameliorate our web design abilities and further improve our coding skills. As first time hackers, we are extremely proud of our final product. We developed a functioning website from scratch in 36 hours!
## What we learned
We learned how to lay out and use colours and shapes on Figma. This helped us a lot while designing our website. We discovered several convenient functionalities that Velo by Wix provides, which strengthened the final product. We learned how to customize the back-end development with a new coding language, JavaScript.
## What's next for The Perfect Blend
Our team plans to add many more coffee types and caffeinated drinks, ranging from teas to energy drinks. We would also like to implement more features, such as saving the tracker progress to compare days and producing weekly charts. | ## Inspiration
The idea was to help people who are blind to discreetly gather context during social interactions and general day-to-day activities.
## What it does
The glasses take a picture and analyze it using Microsoft, Google, and IBM Watson's vision recognition APIs to try to understand what is happening. They then form a sentence and read it to the user. There's also a neural network at play that discerns between the two devs and can tell who is in the frame.
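A simplified sketch of just the sentence-building step (the label lists are invented stand-ins for what the vision APIs might return for a single photo):

```python
# Sketch of the sentence-building step only; the label lists are made-up
# stand-ins for what the vision APIs might return for one photo.
from collections import Counter

watson_labels = ["person", "table", "indoors"]
google_labels = ["person", "laptop", "coffee cup"]

def describe(*label_lists, keep=4):
    # Labels that more than one service agrees on naturally rank first.
    counts = Counter(label for labels in label_lists for label in labels)
    ranked = [label for label, _ in counts.most_common(keep)]
    if not ranked:
        return "I couldn't tell what's in front of you."
    if len(ranked) == 1:
        return f"I can see {ranked[0]}."
    return "I can see " + ", ".join(ranked[:-1]) + " and " + ranked[-1] + "."

print(describe(watson_labels, google_labels))
```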
## How I built it
We took an RPi camera and extended its cable. We then made a hole in one lens of the glasses and fit the camera in there. We also added a touch sensor to discreetly control the camera.
## Challenges I ran into
The biggest challenge we ran into was Natural Language Processing, as in trying to parse together a human-sounding sentence that describes the scene.
## What I learned
I learnt a lot about the different vision APIs out there and about creating/training your own neural network.
## What's next for Let Me See
We want to further improve our analysis and reduce our analyzing time. | ## Inspiration
We needed a system to operate our Hackspace utilities shop, where we sell materials, snacks, and even the use of some restricted equipment. It needed to be instantaneous, allow for a small amount of debt in case of emergency and work using our College ID cards to keep track of who is purchasing what.
## What it does
Each student has a set credit assigned to them that they may spend on our shop's products. To start a transaction they must tap their college ID on the RFID scanner and, after checking their current credit, they can scan the barcode of the product they want to buy. If the transaction would leave them with less than £5 of debt, they may scan more items or proceed to checkout. Their credit can be topped up through our College Union website, which will in turn update our database with their new credit amount.
## How we built it
The interface consists of Bootstrap-generated web pages (HTML) that we control with Python, and these are hosted locally.
We hosted all of our databases on Firebase, accessing them through the Firebase API with a Python wrapper.
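A minimal sketch of the credit/debt rule described above (the `db` dictionary is a stand-in for our Firebase data, and the card UID, barcode and prices are invented):

```python
# Sketch of the checkout rule only; `db` is a stand-in for the Firebase data,
# and the card UID, barcode and prices are invented.
DEBT_ALLOWANCE = 5.00                       # pounds of debt we tolerate

db = {"credit": {"0123456789": 3.20},       # college-card UID -> remaining credit
      "prices": {"5000112554533": 0.80}}    # barcode -> price

def can_add(card_uid, basket_total, barcode):
    price = db["prices"][barcode]
    new_balance = db["credit"][card_uid] - (basket_total + price)
    return new_balance >= -DEBT_ALLOWANCE, price

def checkout(card_uid, basket_total):
    db["credit"][card_uid] -= basket_total  # persist the new balance

ok, price = can_add("0123456789", 0.0, "5000112554533")
if ok:
    checkout("0123456789", price)
```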
## Challenges we ran into
Connecting the database with the GUI without the python program crashing took the majority of our debugging time, but getting it to work in the end was one of the most incredible moments of the weekend.
## Accomplishments that we're proud of
We've never made a webapp before, and we have been pleasantly surprised with how well it turned out: it was clean, easy to use and modular, making it easy to update and develop.
Instead of using technology we wouldn't have available back in London and doing a project with no real future outlook or development, we chose to tackle a problem which we actually needed to solve and whose solution we will be able to use many times in the future. This has meant that completing the Hackathon will have a real impact on the everyday transactions that happen in our lab.
We're also very proud of developing the database in a system which we knew nothing about at the beginning of this event: firebase. It was challenging, but the final result was as great as we expected.
## What we learned
During the Hackathon, we have improved our coding skills, teamworking, database management, GUI development and many other skills which we will be also able to use in our future projects and careers.
## What's next for ICRS Checkout
Because we have concentrated on a specific range of useful tasks for this Hackathon, we would love to be able to develop a more general system that can be used by different societies, universities and even schools; operated under the same principles but with a wider range of card identification possibilities, products and debt allowance. | winning |
## Inspiration
During the day, universities around the world feature the brightest minds, most challenging ideas, and greatest opportunities to grow as both a person and an intellectual. At night, however, such campuses can feel nearly foreign or frightening as the streets lack the normal bustle of students and the light grows dim. Universities attempt to alleviate students' anxiety and worries of safety by offering a variety of services; at Cal, we have our system of Night Safety Shuttles and BearWalk staff that accompany students. Yet, these services tend to have extraordinary wait times, sometimes exceeding two hours. Enter GetHome.
## What it does
GetHome connects verified students with one another such that no user has to walk home alone. By utilizing the pathing and geolocation of Google Maps, as well as the information gathering and communication offered by Cisco Meraki and Spark respectively, GetHome quickly pairs two users headed in a similar direction and provides a path such that they both get to their desired location with a minimal amount of safety risk. In addition, GetHome uses Cisco applications for data analysis and tracking to virtually accompany pairs as they make their way home: access points can ensure that users are on the correct path, and a chat-bot can double-check users have successfully gotten home.
## How we built it
As a webapp, we utilized HTML5 and CSS3 to create a clean and precise landing page, with one redirect included for when a user lines up in queue. The basis of our working code is Javascript, which we used to interface with the various APIs allowing for accurate tracking and pathing of paired users.
Using Google Graphs and Maps, we generated formulas to calculate and accurately map distances between all users as well as the distances between their respective destinations, while simultaneously displaying such information in easy-to-read graphical elements. A combined integration of Cisco Spark and Cisco Meraki, done via creation of Spark bots and information gathering with AWS Lambda, generates the heatmap for users to find their partner as well as open an avenue of communication between the two. Obscured from the user, we also rely on Meraki's Real Time Location Services (RTLS) to track whether individuals are following the anticipated path home; deviations are viewed as hazardous and can be handled by our companion-bot, which checks in with users to see if they're okay.
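A rough sketch of the pairing idea (the real app uses Google Maps distances and live Meraki data; here the coordinates are made up and the matching is a simple greedy pass):

```python
# Pairing sketch only (the real app uses Google Maps distances); the (lat, lon)
# values are made up and the matching is a simple greedy pass.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def pair_cost(u, v):
    # Close starting points and close destinations both matter.
    return haversine_km(u["pos"], v["pos"]) + haversine_km(u["dest"], v["dest"])

def match(queue, max_cost_km=1.5):
    pairs, waiting = [], list(queue)
    while len(waiting) > 1:
        u = waiting.pop(0)
        best = min(waiting, key=lambda v: pair_cost(u, v))
        if pair_cost(u, best) <= max_cost_km:
            waiting.remove(best)
            pairs.append((u["name"], best["name"]))
        # otherwise u goes unpaired in this pass and can queue again later
    return pairs
```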
## Challenges we ran into
Utilizing the REST API efficiently was difficult, seeing as how none of our team members had previous experience working with HTTP POST requests. In addition, working with the servers provided for accessing real-time data of our immediate location proved difficult, as some connection errors and faulty permissions prevented us from dedicating our full effort towards completely understanding the use cases of the functions and services provided alongside the data.
## Accomplishments that we're proud of
As a team, we are proud of creating our first ever Spark bot and formulating conclusions based on real-time data via Meraki's Dashboard. Meshing together multiple APIs can become messy at times, so we are extremely proud of our clean implementations that serve to precisely and efficiently merge the varied applications we delved into.
## What we learned
Throughout this hacking process, we learned a great deal about how to work with Node-RED for Cisco Meraki and Spark queries, as well as how to integrate such applications with services like AWS Lambda to properly retrieve points of interest, such as MAC addresses for tracking. The less-experienced of our team also learned how to utilize APIs in a very general sense within HTML and Javascript; as a whole, we built upon past experiences to further increase our efficiency and teamwork in regards to formulating ideas and bringing them to manifestation through research and dedicated work.
## What's next for GetHome
Meraki's ability to precisely locate devices through triangulation by access points as well as the flexibility of heatmaps allows for the possibility of creating paths that avoid high-risk areas; this would be a further step in preventative safety, reaching beyond what our application currently utilizes Meraki's location services for. Combined, having Meraki anticipate the next access point the user should be pinged at as well as formulating the path so it is not only the shortest path, but also the safest, would be extremely beneficial for GetHome and its users. | ## Inspiration
We hear stories about crimes happening around us everyday. It's especially terrifying since we're students and we just want to enjoy our college experience. While discussing ideas to hack, we realized this problem **resonated with all of us**. We immediately realized that we need to work on this.
## What it does
It gives you and your loved ones a chance to take your safety into your own hands. By using our application, you are taking control of the most important thing you have, your life. It generates the safest and shortest path from Location A to Location B.
## How we built it
We analyzed large data sets of crime from the past 10 years and picked representative points after weighting them by relevance and severity of crime. We then created a data analytics flow (that can be easily repeated with new data) to glean more useful insights from the raw data. To help analyze the data further, we plotted it in virtual reality (VR) to look at data spreads and densities. This processed data was then pushed into our MongoDB database hosted on mLab. We then used the resulting clusters to determine which areas we should avoid while routing with the Wrld and OSRM APIs. From here our app generates a single route based on all the information, following what our algorithm determines to be the "safest route" for the user. This route is then plotted and displayed for the user's convenience.
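A small sketch of the weighting-and-clustering step (the incidents, severities and thresholds below are invented, and the actual routing goes through the Wrld and OSRM APIs):

```python
# Sketch of the weighting-and-clustering step only; incidents, severities and
# thresholds are invented, and the real routing goes through Wrld/OSRM.
import pandas as pd
from sklearn.cluster import KMeans

crimes = pd.DataFrame({
    "lat":       [37.78, 37.79, 37.77, 37.80],
    "lon":       [-122.41, -122.42, -122.40, -122.43],
    "type":      ["assault", "theft", "assault", "vandalism"],
    "years_ago": [1, 6, 2, 9],
})

SEVERITY = {"assault": 3.0, "theft": 1.5, "vandalism": 0.5}
crimes["weight"] = crimes["type"].map(SEVERITY) / (1 + crimes["years_ago"])  # recent + severe counts more

hot = crimes[crimes["weight"] > 0.3]                       # drop stale or minor incidents
centers = (KMeans(n_clusters=2, n_init=10)
           .fit(hot[["lat", "lon"]], sample_weight=hot["weight"])
           .cluster_centers_)                              # representative points to route around
```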
## Challenges we ran into
We started out with millions of cells of data, which made it really hard to work with. We first had to extract the most relevant data and convert it into a completely new format to optimize it for our path-finding algorithm. In addition, we had to optimize our MongoDB setup to work well with our data, as we made frequent queries to a database that originally held over a million elements. We also constantly mixed up latitude and longitude values and ran into issues making HTTP requests for the various APIs.
## Accomplishments that we're proud of
We are happy that we were able to create a minimum viable product in this short duration. We're especially glad it's not just a "weekend-only" idea and that it is something we can continue to make better. We hope that in the future it can truly have a social impact and actually make the world a safer place.
## What we learned
* Wrld/OSRM API
* Node/Express/MongoDB
* HTTP Requests/Callback functions
* Unity VR SDK
* Pandas/Numpy
* Coordinating team efforts in different parts of the Application
## What's next for SafeWorld
Using real time data to better predictions for real life incidents (protests). In addition, having more global data sets would be a optimal next step to get it working in more cities. This would be expedited by our adaptable framework for data generation and pathing. | ## Inspiration
While we were doing preliminary research, we had found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self worth and decreased depression.
While people do have virtual classes and social media, some still had trouble feeling close to anyone, because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed through physical proximity don't necessarily make people feel understood and can still leave them feeling lonely. Proximity friendships formed in virtual classes also felt shallow, in the sense that they only lasted for the duration of the online class.
With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies.
## What it does
Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming.
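The gating rule could be sketched as follows (the real backend is Java with Watson's Tone Analyzer; the tone labels and thresholds in this Python illustration are assumptions, not Watson's exact output):

```python
# Language-neutral sketch of the gate (the real backend is Java + Watson Tone
# Analyzer); the tone labels and thresholds here are assumptions.
MEAN_SPIRITED = {"anger", "disgust"}        # tones we treat as attacks on others
VENTING_OK = {"sadness", "fear"}            # distress directed at oneself is allowed

def allow_post(tones):
    """tones: dict of tone label -> score in [0, 1] from the analyzer."""
    hostile = max((tones.get(t, 0.0) for t in MEAN_SPIRITED), default=0.0)
    distressed = max((tones.get(t, 0.0) for t in VENTING_OK), default=0.0)
    # Block clearly hostile content, but let high-distress venting through.
    return hostile < 0.75 or distressed > hostile

allow_post({"sadness": 0.9, "anger": 0.2})  # True  -> publish
allow_post({"anger": 0.95})                 # False -> block
```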
## How we built it
* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgresSQL
* Frontend: Bootstrap
* External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer
* Cloud: Heroku
## Challenges we ran into
We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped because, by the time we came up with the idea, we didn't have enough time left to build it. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)
## Accomplishments that we're proud of
Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*
## What we learned
As our first virtual hackathon, this has been a learning experience for remote collaborative work.
UXer: I feel like I've gotten even better at speedrunning the UX process than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know Python), so watching the dev work and finding out what kind of things people can code was exciting to see.
## What's next for Reach
If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call. | partial |
## Inspiration
Being a student at the University of Waterloo, every other semester I have to attend interviews for co-op positions. Although talking to people gets easier the more often you do it, I still feel slightly nervous during such face-to-face interactions. In this nervousness, the fluency of my conversation isn't always the best. I tend to use unnecessary filler words ("um, umm" etc.) and repeat the same adjectives over and over again. In order to improve my speech through practice against a program, I decided to create this application.
## What it does
InterPrep uses the IBM Watson "Speech-To-Text" API to convert spoken word into text. After doing this, it analyzes the words that are used by the user and highlights certain words that can be avoided, and maybe even improved to create a stronger presentation of ideas. By practicing speaking with InterPrep, one can keep track of their mistakes and improve themselves in time for "speaking events" such as interviews, speeches and/or presentations.
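A sketch of the transcript analysis only (the real app does this in Node.js on Stdlib, with Watson handling the speech-to-text; the filler-word list is illustrative):

```python
# Sketch of the transcript analysis only (the real app does this in Node.js on
# Stdlib, with Watson handling speech-to-text); the filler list is illustrative.
from collections import Counter
import re

FILLERS = {"um", "umm", "uh", "like", "basically"}

def analyze(transcript, repeat_threshold=3):
    words = re.findall(r"[a-z']+", transcript.lower())
    fillers = Counter(w for w in words if w in FILLERS)
    overused = {w: c for w, c in Counter(words).items()
                if c >= repeat_threshold and w not in FILLERS and len(w) > 3}
    return {"filler_count": sum(fillers.values()),
            "fillers": dict(fillers),
            "overused_words": overused}

analyze("um so I am really really really passionate about, um, really hard problems")
```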
## How I built it
In order to build InterPrep, I used the Stdlib platform to host the site and create the backend service. The IBM Watson API was used to convert spoken word into text. The mediaRecorder API was used to receive and parse spoken text into an audio file which later gets transcribed by the Watson API.
The languages and tools used to build InterPrep are HTML5, CSS3, JavaScript and Node.JS.
## Challenges I ran into
"Speech-To-Text" API's, like the one offered by IBM tend to remove words of profanity, and words that don't exist in the English language. Therefore the word "um" wasn't sensed by the API at first. However, for my application, I needed to sense frequently used filler words such as "um", so that the user can be notified and can improve their overall speech delivery. Therefore, in order to implement this word, I had to create a custom language library within the Watson API platform and then connect it via Node.js on top of the Stdlib platform. This proved to be a very challenging task as I faced many errors and had to seek help from mentors before I could figure it out. However, once fixed, the project went by smoothly.
## Accomplishments that I'm proud of
I am very proud of the entire application itself. Before coming to QHacks, I only knew how to do front-end web development. I didn't have any knowledge of back-end development or of using APIs. Therefore, by creating an application that contains all of the things stated above, I am really proud of the project as a whole. In terms of smaller individual accomplishments, I am very proud of creating my own custom language library and of using multiple APIs in one application successfully.
## What I learned
I learned a lot of things during this hackathon. I learned back-end programming, how to use API's and also how to develop a coherent web application from scratch.
## What's next for InterPrep
I would like to add more features for InterPrep as well as improve the UI/UX in the coming weeks after returning back home. There is a lot that can be done with additional technologies such as Machine Learning and Artificial Intelligence that I wish to further incorporate into my project! | ## Inspiration
The amount of data in the world today is mind-boggling. We are generating 2.5 quintillion bytes of data every day at our current pace, but the pace is only accelerating with the growth of IoT.
We felt that the world was missing a smart find-feature for videos. To unlock heaps of important data from videos, we decided on implementing an innovative and accessible solution to give everyone the ability to access important and relevant data from videos.
## What it does
CTRL-F is a web application implementing computer vision and natural-language-processing to determine the most relevant parts of a video based on keyword search and automatically produce accurate transcripts with punctuation.
## How we built it
We leveraged the MEVN stack (MongoDB, Express.js, Vue.js, and Node.js) as our development framework, integrated multiple machine learning/artificial intelligence techniques provided by industry leaders shaped by our own neural networks and algorithms to provide the most efficient and accurate solutions.
We perform key-word matching and search result ranking with results from both speech-to-text and computer vision analysis. To produce accurate and realistic transcripts, we used natural-language-processing to produce phrases with accurate punctuation.
We used Vue to create our front-end and MongoDB to host our database. We implemented both IBM Watson's speech-to-text API and Google's Computer Vision API along with our own algorithms to perform solid key-word matching.
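A simplified sketch of the ranking idea (the segments, labels and weights below are made up; the real matching runs over the Watson transcript and the Vision API labels stored in MongoDB):

```python
# Ranking sketch only: score each video segment by keyword hits from both the
# transcript and the vision labels. Segments, labels and weights are made up.
segments = [
    {"start": 12, "transcript": "today we whisk the eggs and fold in flour",
     "labels": ["bowl", "whisk", "egg"]},
    {"start": 95, "transcript": "now bake at 350 for twenty minutes",
     "labels": ["oven", "tray"]},
]

def rank(query, segments, w_speech=1.0, w_vision=1.5):
    terms = set(query.lower().split())
    scored = []
    for seg in segments:
        speech_hits = sum(t in seg["transcript"].lower() for t in terms)
        vision_hits = sum(t in [l.lower() for l in seg["labels"]] for t in terms)
        score = w_speech * speech_hits + w_vision * vision_hits
        if score:
            scored.append((score, seg["start"]))
    return [start for _, start in sorted(scored, reverse=True)]

rank("whisk eggs", segments)                # -> [12]
```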
## Challenges we ran into
Trying to implement both Watson's API and Google's Computer Vision API proved to have many challenges. We originally wanted to host our project on Google Cloud's platform, but with many barriers that we ran into, we decided to create a RESTful API instead.
Due to the number of new technologies we were figuring out, we ended up fairly sleep-deprived. And staying up for way longer than you're supposed to is the best way to increase your rate of errors and bugs.
## Accomplishments that we're proud of
* Implementation of natural-language-processing to automatically determine punctuation between words.
* Utilizing both computer vision and speech-to-text technologies along with our own rank-matching system to determine the most relevant parts of the video.
## What we learned
* Learning a new development framework a few hours before a submission deadline is not the best decision to make.
* Having a set scope and specification early-on in the project was beneficial to our team.
## What's next for CTRL-F
* Expansion of the product into many other uses (professional education, automate information extraction, cooking videos, and implementations are endless)
* The launch of a new mobile application
* Implementation of a Machine Learning model to let CTRL-F learn from its correct/incorrect predictions | ## Inspiration
With our inability to acquire an oculus rift at YHack, we were looking forward to using one in any implementation. As we are entering the job market soon, in search of summer internships, we thought about how many people, students in particular, do not have sufficient interviewing experience. Thus, our idea was to use VR to provide these people with the ability to train and succeed with interviews.
## What it does
This hack simulates an interview scenario to aid in the practice of interviewing skills and the removal of improper speech patterns using Unity, Oculus and IBM Watson.
## How I built it
Using Unity, we created the starting and interview scenes and implemented the animations. The back end of the speech-processing system was built with IBM Watson's Unity tools. These tools allowed for the integration of IBM's speech-to-text APIs, letting the user's speech be converted into text and later processed for the output.
## Challenges I ran into
While implementing IBM Watson speech-to-text, Oculus Rift compatibility, and the Unity scenes, we came across a few errors. Firstly, we had been working in different versions of Unity, so when the time came to combine our projects there were compatibility issues.
## Accomplishments that I'm proud of
One thing we are proud of is how we overcame the many challenges that arose during this project. We are also proud of the overall implementation of the different facets of the hack and how we were able to mesh them all together.
## What I learned
All of us had to learn how to use and implement virtual reality as none of us had the opportunity to work with it before. We also learned a lot about Unity and implementing items such as VR and IBM Watson speech to text.
## What's next for InterVR
Implementing more thorough testing on the speech to text file would be the first large improvement that could be made for InterVR. Another improvement that could be made for InterVR is a more diverse cast of interviewers, additional scenes for the interviews, and more questions to allow the user to have a more specialized experience. | winning |
## Inspiration: We decided to try out chrome extensions.
## What it does: Creates a new tab with a screen that looks like the user is working.
## How I built it: I used JavaScript and JSON to build this.
## Challenges I ran into: Having the screen loop such that each click would produce a different screen.
## Accomplishments that I'm proud of: Finishing the project.
## What I learned: It is the process that matters.
## What's next for Pretending to Work: Pretending to be cool? | ## Inspiration
Luke had this idea at 2AM at night inspired from an idea he saw that had entrepreneurs able to share anything they thought was useful to their success, or something they used everyday. For example, Mark Cuban being able to share quotes he looked at everyday. We spun this to have anybody, not just rich business men, have a place to store quotes, code examples, stories, classes, or anything else they use daily or think could be helpful later.
## What it does
If you have a junk drawer at home think of it like that. A typical junk drawer has batteries, screwdrivers, tape, light bulbs, etc. This is like that except for anything on the internet. Our Chrome Extension takes a screenshot of any page you want and store it in your personal gallery. From there you can access that drawer and see everything you have thrown in there. This app takes away the stress and hardships of having to label anything you want to store away in a folder. This way you have a general place where anything useful can go.
## How we built it
We built this app with no prior knowledge of Chrome extensions. We used tutorials to hone our skills quickly and produced an extension that can take screenshots, scrape URLs, and store them locally. We also used our React JS skills to make a website that links to both the Chrome extension and your personal gallery. The React JS website was by far the easiest part. In short, we used React JS and a locally stored Chrome extension to make this website.
## Challenges we ran into
Our first major problem was our idea: we could not agree on what we wanted it to look like. After coding for a little while we decided to stick with screenshots alone instead of trying to form everything into a JSON file or anything more complicated. After that, we mostly ran into simple coding problems. The next problem was how we were going to store the screenshots taken by the extension. We tried Firebase and other cloud storage. However, it was late on May 14th when Luke texted me that he had just remembered local storage, and we never looked back from there.
## Accomplishments that we're proud of
I am proud of Luke, as he was the main mastermind behind the project and figured out how to use and create a Chrome extension. He started the project with zero knowledge of the subject matter, and within two days had a deep understanding of what he needed to do. I am proud of myself because, as the more novice programmer in the family, I have not finished a project with Luke before, and I think this is a huge step forward for me and for us working together.
## What we learned
We learned loads about how Chrome Extensions work. Probably more than I will ever need to know again. I learned more about proper professional code formatting. Luke had to help me with that, but I now feel much better about how I should format my code especially in React. We learned each others strengths and weaknesses and are excited to work together in more competitions in the future. With that being said we also learned not to bite off more than we can chew. With our main problem being finding an idea we had less time to code than is desirable. Though we are both happy with the final product we wish we had more time, and hope to continue producing this until completion.
## What's next for Junk Drawer
The next steps are mostly polishing. As of writing this the gallery part of the web app is not finished, and the website buttons do not link to anything. So finishing those two features are important and general polishing/making things look pretty is the next steps for us and Junk Drawer. | ## Inspiration
**Read something, do something.** We constantly encounter articles about social and political problems affecting communities all over the world – mass incarceration, the climate emergency, attacks on women's reproductive rights, and countless others. Many people are concerned or outraged by reading about these problems, but don't know how to directly take action to reduce harm and fight for systemic change. **We want to connect users to events, organizations, and communities so they may take action on the issues they care about, based on the articles they are viewing**
## What it does
The Act Now Chrome extension analyzes articles the user is reading. If the article is relevant to a social or political issue or cause, it will display a banner linking the user to an opportunity to directly take action or connect with an organization working to address the issue. For example, someone reading an article about the climate crisis might be prompted with a link to information about the Sunrise Movement's efforts to fight for political action to address the emergency. Someone reading about laws restricting women's reproductive rights might be linked to opportunities to volunteer for Planned Parenthood.
## How we built it
We built the Chrome extension using a background.js and a content.js file to dynamically render a ReactJS app onto any webpage the API identified as containing topics of interest. We built the REST API back end in Python using Django and Django REST Framework. Our API is hosted on Heroku; the Chrome extension is loaded in "developer mode" and consumes this API. We used Bitbucket to collaborate with one another and held meetings every 2-3 hours to reconvene and discuss progress or challenges, keeping our team productive.
## Challenges we ran into
Our initial attempts to use sophisticated NLP methods to measure the relevance of an article to a given organization or opportunity for action were not very successful. A simpler method based on keywords turned out to be much more accurate.
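Since the back end is Python/Django, the keyword approach can be sketched roughly like this (the organizations and keyword sets below are invented examples, not our production data):

```python
# Minimal sketch of the keyword approach (the organizations and keyword sets
# below are invented examples, not production data).
CAUSES = {
    "sunrise_movement": {"climate", "emissions", "wildfire", "fossil"},
    "planned_parenthood": {"abortion", "reproductive", "contraception"},
}

def best_match(article_text, min_hits=2):
    words = set(article_text.lower().split())
    scores = {org: len(words & keywords) for org, keywords in CAUSES.items()}
    org, hits = max(scores.items(), key=lambda kv: kv[1])
    return org if hits >= min_hits else None     # None -> don't show a banner

best_match("Wildfire season is fuelled by rising emissions and fossil fuel use")
# -> "sunrise_movement"
```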
Passing messages to the back-end REST API from the Chrome extension was somewhat tedious as well, especially because the API had to be consumed before the react app was initialized. This resulted in the use of chrome's messaging system and numerous javascript promises.
## Accomplishments that we're proud of
In just one weekend, **we've prototyped a versatile platform that could help motivate and connect thousands of people to take action toward positive social change**. We hope that by connecting people to relevant communities and organizations, based off their viewing of various social topics, that the anxiety, outrage or even mere preoccupation cultivated by such readings may manifest into productive action and encourage people to be better allies and advocates for communities experiencing harm and oppression.
## What we learned
Although some of us had basic experience with Django and building simple Chrome extensions, this project provided new challenges and technologies to learn for all of us. Integrating the Django backend with Heroku and the ReactJS frontend was challenging, along with writing a versatile web scraper to extract article content from any site.
## What's next for Act Now
We plan to create a web interface where organizations and communities can post events, meetups, and actions to our database so that they may be suggested to Act Now users. This update will not only make our applicaiton more dynamic but will further stimulate connection by introducing a completely new group of people to the application: the event hosters. This update would also include spatial and temporal information thus making it easier for users to connect with local organizations and communities. | losing |
## Inspiration
One of the biggest challenges faced by families in war effected countries was receiving financial support from their family members abroad. High transaction fees, lack of alternatives and a lack of transparency all contributed to this problem, leaving families struggling to make ends meet.
According to the World Bank, the **average cost of sending remittances to low income countries is a striking 7% of the amount sent**. For conflict-affected families, a 7% transaction fee can mean the difference between putting food on the table and going hungry for days. The truth is that the livelihoods of those left behind depend vitally on remittance transfers. Remittances are of central importance for restoring stability for families in post-conflict countries. At Dispatch, we are committed to changing the lives of war-stricken communities. Our novel app allows families to receive money from their loved ones without having to worry about the financial barriers that previously stood in their way.
However, the problem is far larger. Economically, over **$20 billion** has been sent back and forth in the United States this year, and we are barely even two months in. There are more than 89 million migrants in the United States itself. In a hugely untapped market that cares little about its customers and is dominated by exploitative financial institutions, we provide the go-to technology-empowered alternative that lets users help their families and friends around the world. We provide a globalized, one-stop shop for sending money across the world.
*Simply put, we are the iPhone of a remittance industry that uses landlines.*
## What problems exist
1. **High cost, mistrust and inefficiency**: Traditional remittance services often charge high fees, which significantly reduce the amount of money the recipient receives. **A report by the International Fund for Agricultural Development (IFAD) found that the high cost of remittances leads to a loss of $25 billion every year for developing countries**. Additionally, these services don't provide clear information on exchange rates and fees, which breeds mistrust among users. They tend to cap how much one can send per transaction, and security issues arise once the money has been sent. Lastly, these agencies take days to acknowledge, process, and complete a transaction, making immediate transfers impossible.
2. **Zero alternatives = exploitation**: It's also important to note that very few traditional remittance services are offered in countries affected by war, since most providers simply do not operate in these regions. With extremely limited options, families have no choice but to accept the high fees and poor exchange rates these agencies offer. This isn't unique to war-stricken countries; it is a huge problem across developing countries. Because of the high fees associated with traditional remittance services, many families in developing countries are unable to rely on remittances alone to support themselves. As a result, they may turn to alternative financial options that can be exploitative and dangerous. One such alternative is the use of loan sharks, who offer quick loans with exorbitant interest rates, often trapping borrowers in a cycle of debt.
## How we improve the status quo
**We are a mobile application that provides a low-cost, transparent and safe way to remit money. With every transaction made through Dispatch, our users are making a tangible difference in the lives of their loved ones.**
1. **ZERO Transaction fees**: Instead of charging a percentage-based commission fee, we charge a subscription fee per month. This has a number of advantages. Foremost, it offers a cost effective solution for families because it remains the same regardless of the transfer amount. This also makes the process transparent and simpler as the total cost of the transaction is clear upfront.
2. **Simplifying the process**: Due to the complexity of the current remittance process, migrants may find themselves vulnerable to exploitative offers from alternative providers. This is because they don’t understand the details and risks associated with these alternatives. On our app, we provide clear and concise information that guides users through the entire process. A big way of simplifying the process is to provide multilingual support. This not only removes barriers for immigrants, but also allows them to fully understand what’s happening without being taken advantage of.
3. **Transparency & Security**
* Clearly stated and understood fees and exchange rates - no hidden fees
* Real-time exchange rate updates
* Remittance tracker
* Detailed transaction receipts
* Secure user data (Users can only pay when requested to)
4. **Instant notifications and Auto-Payment**
* Reminders for bill payments and insurance renewals
* Can auto-pay bills (each payment still requires confirmation before it's made) so the user remains worry-free and does not need an external calendar to manage finances
* Notifications for when new requests have been made by the remitter
## How we built it
1. **Backend**
* Our backend is built on an intricate [relational database](http://shorturl.at/fJTX2) between users, their transactions and the 170 currencies and their exchange rates
* We use the robust Checkbook API as the framework to make payments and keep track of the invoices of all payments run through Dispatch
2. **Frontend**
* We used the handy and intuitive Retool environment to develop a rudimentary app prototype, as demonstrated in our [video demo](https://youtu.be/rNj2Ts6ghgA)
* It implements most of the core functionality of our app and makes use of our functional MySQL database to create a working app
* The Figma designs represent our vision of what the end product UI would look like
## Challenges we ran into
1. International money transfer regulations
2. Government restrictions on currencies /embargos
3. Losing money initially with our business model
## Accomplishments that we're proud of
1. Develop an idea with immense social potential
2. Integrating different APIs into one comprehensive user interface
3. Coming from a grand total of no hackathon experience, we were able to build a functioning prototype of our application.
4. Team bonding – jamming to Bollywood music
## What we learned
1. How to use Retool and Checkbook APIs
2. How to deploy a full fledged mobile application
3. How to use MySQL
4. Understanding the challenges faced by migrants
5. Gained insight into how fintech can solve social issues
## What's next for Dispatch
The primary goal of Dispatch is to empower war-affected families by providing them with a cost-effective and reliable way to receive funds from their loved ones living abroad. However, our vision extends beyond this demographic, as we believe that everyone should have access to an affordable, safe, and simple way to send money abroad.
We hope to continuously innovate and improve our app. We hope to utilize blockchain technology to make transactions more secure by providing a decentralized and tamper proof ledger. By leveraging emerging technologies such as blockchain, we aim to create a cutting-edge platform that offers the highest level of security, transparency and efficiency.
Ultimately, our goal is to create a world where sending money abroad is simple, affordable, and accessible to everyone. **Through our commitment to innovation, transparency, and customer-centricity, we believe that we can achieve this vision and make a positive impact on the lives of millions of people worldwide.**
## Ethics
Banks are structurally disincentivized to help make payments seamless for migrants. We read through various research reports, with Global Migration Group’s 2013 Report on the “Exploitation and abuse of international migrants, particularly those in an irregular situation: a human rights approach” to further understand the violation of present ethical constructs.
As an example, consider how bad a 3% transaction fees (using any traditional banking service) can be for an Indian student whose parents pay Stanford tuition -
3% of $82,162 = $2,464.86 (USD)
≈ 202,291 (INR) [at 1 USD = 82.07 INR]
That is, it costs a family roughly an extra 200,000 Indian rupees to pay a student's Stanford tuition via a traditional banking service. Out of 1.4 billion Indians, this is more than the average annual income. The transaction fees alone can devastate a home.
Clearly, we don’t destroy homes, hearts, or families. We build them, for everyone without exception.
We considered the current ethical issues that arise with traditional banking or online payment systems. The following ethical issues arise with creating exclusive, expensive, and exploitative payment services for international transfers:
1. Banks earn significant revenue from remittance payments, and any effort to make the process more seamless could potentially reduce their profits.
2. Banks may view migrant populations as a high-risk group for financial fraud, leading them to prioritize security over convenience in remittance payments
3. Remittance payments are often made to developing countries with less developed financial infrastructure, making it more difficult and costly for banks to facilitate these transactions
4. Many banks are large, bureaucratic organizations that may not be agile enough to implement new technologies or processes that could streamline remittance payments.
5. Banks may be more focused on attracting higher-value customers with more complex financial needs, rather than catering to the needs of lower-income migrants.
6. The regulatory environment surrounding remittance payments can be complex and burdensome, discouraging banks from investing in this area.
7. Banks do not have a strong incentive to compete on price in the remittance market, since many migrants are willing to pay high fees to ensure their money reaches its intended recipient.
8. Banks may not have sufficient data on the needs and preferences of migrant populations, making it difficult for them to design effective remittance products and services.
9. Banks may not see remittance payments as a strategic priority, given that they are only a small part of their overall business.
10. Banks may face cultural and linguistic barriers in effectively communicating with migrant populations, which could make it difficult for them to understand and respond to their needs.
Collectively, as remittances lower, we lose out on the effects of trickle-down economics in developing countries, detrimentally harming how they operate and even stunting their growth in some cases. For the above reasons, our app could not be a traditional online banking system.
We feel there is an ethical responsibility to help other countries benefit from remittances. Crucially, we feel there is an ethical responsibility to help socioeconomically marginalized communities help their loved ones. Hence, we wanted to use technology as a means to include, not exclude and built an app that we hope could be versatile and inclusive to the needs of our user. We needed our app design to be helpful towards our user - allowing the user to gain all the necessary information and make bill payments easier to do across the world. We carefully chose product design elements that were not wordy but simple and clear and provided clear action items that indicated what needed to be done. However, we anticipated the following ethical issues arising from our implementation :
1. Data privacy: Remittance payment apps collect a significant amount of personal data from users. It is essential to ensure that the data is used ethically and is adequately protected.
2. Security: Security is paramount in remittance payment apps. Vulnerabilities or data breaches could lead to significant financial losses or even identity theft. Fast transfers can often lead to mismanagement in accounting.
3. Accessibility: Migrants who may be unfamiliar with technology or may not have access to smartphones or internet may be left out of such services. This raises ethical questions around fairness and equity.
4. Transparency: It is important to provide transparent information to users about the costs and fees associated with remittance payment apps, including exchange rates, transfer fees, and any other charges. We even provide currency optimization features, that allows users to leverage low/high exchange rates so that users can save money whenever possible.
5. Inclusivity: Remittance payment apps should be designed to be accessible to all users, regardless of their level of education, language, or ability. This raises ethical questions around inclusivity and fairness.
6. Financial education: Remittance payment apps could provide opportunities for financial education for migrants. It is important to ensure that the app provides the necessary education and resources to enable users to make informed financial decisions.
Conscious of these ethical issues, we came up with the following solutions to provide a more principally robust app:
1. Data privacy: We collect minimal user data. The only information we care about is who sends and gets the money. No extra information is ever asked for. For undocumented immigrants this often becomes a concern and they cannot benefit from remittances. The fact that you can store the money within the app itself means that you don’t need to go through the bank's red-tape just to sustain yourself.
2. Security: We only send user data once the user posts a request from the sender. We prevent spam by only allowing contacts to send those requests to you. This prevents the user from sending large amounts of money to the wrong person. We made fast payments only possible in highly urgent queries, allowing for a priority based execution of transactions.
3. Accessibility: Beyond simple button clicks, we don’t require migrants to have a detailed or nuanced knowledge of how these applications work. We simplify the user interface with helpful widgets and useful cautionary warnings so the user gets questions answered even before asking them.
4. Transparency: With live exchange rate updates, simple reminders about what to pay when and to who, we make sure there is no secret we keep. For migrants, the assurance that they aren’t being “cheated” is crucial to build a trusted user base and they deserve to have full and clearly presented information about where their money is going.
5. Inclusivity: We provide multilingual preferences for our users, which means that they always end up with the clearest presentation of their finances and can understand what needs to be done without getting tangled up within complex and unnecessarily complicated “terms and conditions”.
6. Financial education: We provide accessible support resources sponsored by our local partners on how to best get accustomed to a new financial system and understand complex things like insurance and healthcare.
Before further implementation, we need to robustly test how secure and spam-free our payment system could be. Having a secure payment system is a high ethical priority for us.
Overall, we felt there were a number of huge ethical concerns that we needed to solve as part of our product and design implementation. We felt we were able to mitigate a considerable percentage of these concerns to provide a more inclusive, trustworthy, and accessible product to marginalized communities and immigrants across the world. | ## Inspiration
As international students, we often have to navigate around a lot of roadblocks when it comes to receiving money from back home for our tuition.
Cross-border payments are gaining momentum with so many emerging markets. In 2021, the top five recipient countries for remittance inflows in current USD were India (89 billion), Mexico (54 billion), China (53 billion), the Philippines (37 billion), and Egypt (32 billion). The United States was the largest source country for remittances in 2020, followed by the United Arab Emirates, Saudi Arabia, and Switzerland.
However, Cross-border payments face 5 main challenges: cost, security, time, liquidity & transparency.
* Cost: Cross-border payments are typically expensive due to currency exchange costs, intermediary charges, and regulatory costs.
* Time: Most international payments take anything between 2-5 days.
* Security: The rate of fraud in cross-border payments is comparatively higher than in domestic payments because money is much more difficult to track once it crosses the border.
* Standardization: Different countries tend to follow a different set of rules & formats which make cross-border payments even more difficult & complicated at times.
* Liquidity: Most cross-border payments work on the pre-funding of accounts to settle payments; hence it becomes important to ensure adequate liquidity in correspondent bank accounts to meet payment obligations within cut-off deadlines.
## What it does
Cashflow is a solution to all of the problems above. It provides a secure method to transfer money overseas. It uses the checkbook.io API to verify users' bank information and check for liquidity, and with features such as KYC it ensures security while enabling instant payments. Further, it uses an exchange-rate API to convert currencies using accurate, non-inflated rates.
Sending money:
Our system requests a few pieces of information from you, which pertain to the recipient. After having added your bank details to your profile, you will be able to send money through the platform.
The recipient will receive an email message, through which they can deposit into their account in multiple ways.
Requesting money:
When you request money from a sender, an invoice is generated for them. They can choose to send the money back through multiple methods, including credit and debit card payments.
## How we built it
We built it using HTML, CSS, and JavaScript. We also used the Checkbook.io API and exchange rate API.
## Challenges we ran into
Neither of us is familiar with back-end technologies or React. Mihir had never worked with JS before, and I hadn't worked on many web-dev projects in the last two years, so we had to do a lot of learning and refreshing of knowledge as we built the project, which took a lot of time.
## Accomplishments that we're proud of
We learned a lot and built the whole web app while continuously learning. Mihir learned JavaScript from scratch and coded in it for the whole project, all in under 36 hours.
## What we learned
We learned how to integrate APIs in building web apps, JavaScript, and a lot of web dev.
## What's next for CashFlow
We were having a couple of bugs that we couldn't fix, we plan to work on those in the near future. | # Sync
Always in sync for humanity
Introducing **Sync**: a robust multi-agent framework designed to streamline disaster response and resource allocation.
With our intuitive web app, citizens and aid workers can easily submit *real-time reports*, which our regional reporting agent aggregates and analyzes. Sync then generates comprehensive reports and efficiently requests resources from appropriate NGOs based on recommendations from expert agents, ensuring timely and effective aid distribution. **Transform chaos into coordinated action with Sync — where every report makes a difference**.
## Motivation
The motivation for creating Sync stems from the pressing need to improve disaster response efficiency. We were inspired in particular by our awareness of ongoing wildfire situations in California. Research highlights that delays in disaster response are often due to fragmented communication, inefficient data aggregation, and slow resource allocation. Sync addresses these challenges by providing a seamless platform where real-time reports from citizens and aid workers are quickly aggregated and analyzed by intelligent agents. This enables a decentralized model of decision-making that is faster and more accurate. Our goal is to reduce response times, enhance coordination, and ultimately save lives and property in critical disaster scenarios, with a focus on the ongoing climate crisis. Our product also extends the reach of relief agencies by alerting them to relevant disaster scenes that fall within their area of practice, all without having to directly hire ground-level reporters and data collectors.
## Challenges and Takeaways
* **Real-time Synchronization Across Multiple Devices**: Ensuring that data updates and reports are synchronized in real-time across multiple devices required robust backend-frontend architecture and efficient data handling techniques
* **Structured Output from agents for cost projections and optimal NGO routing**: Developing a multi-agent system that can communicate and share information effectively was a challenging but rewarding experience. We also had to ensure that data was structured enough to be served to the frontend
## Multi-Agent Architecture


## Techstack
LangChain & LangGraph, NextJS, Tailwind, FastAPI, Anthropic
## Developers
[@uzairname](https://www.github.com/uzairname)
[@sanya1001](https://www.github.com/sanya1001)
[@sidb70](https://www.github.com/sidb70) | winning |
## What Is It
The Air Synth is a virtual synthesizer that can be played without the need for a physical instrument. By simply moving your fingers in the air, the Air Synth matches your motions to the correct pitches and allows you to practice, jam, and compose wherever and whenever you want.
## How It Works
The Air Synth uses OpenCV to detect the contours of a hand. This is done with background subtraction, a technique that compares a static background image with the live camera feed. The resulting image is passed through a series of filters: black and white filter and a gaussian blur. Then the contour lines are drawn over the hand and critical points are identified. We map these critical points to a range of y-values on the GUI in order to determine which note should be played. | ## Inspiration
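A sketch of that detection pipeline (thresholds and the note map are illustrative; `background` is assumed to be a blurred grayscale frame captured before the hand enters view):

```python
# Sketch of the detection pipeline; thresholds and the note map are illustrative,
# and `background` is a blurred grayscale frame captured before the hand appears.
import cv2

NOTES = ["C5", "B4", "A4", "G4", "F4", "E4", "D4", "C4"]   # top of frame -> bottom

def note_for_frame(frame, background, frame_height):
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(background, gray)                    # background subtraction
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)               # largest moving blob
    x, y, w, h = cv2.boundingRect(hand)
    band = int(y / frame_height * len(NOTES))               # map y position to a note band
    return NOTES[min(band, len(NOTES) - 1)]
```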
Many people were left unprepared to effectively express their ideas over Zoom. As students we’ve noticed that with remote learning, instructors seem to feel very distant, teaching through either slides or drawing on a tablet, without us being able to see their face. We know body language is an important part of being a presenter, so why should Zoom limit us like this? Aside from instructional content, Zoom meetings are boring so we ought to spice it up. Introducing AirCanvas!
## What it does
AirCanvas allows you to have a virtual whiteboard powered by Deep Learning and Computer Vision! Simply stick out your index finger as the pen, and let your imagination unfold in the air that is your canvas. This adds an entirely new dimension of communication that is not possible in person. You can also erase the drawings, drag the canvas around and draw with both hands simultaneously!
## How we built it
We begin with OpenPose, a project by researchers at Carnegie Mellon which performs pose estimation on a video stream in real time. With it, we are able to extract information about one’s poses, including the locations of joints. With this information, we have developed our own algorithm to recognize gestures that indicate the user’s desired actions. We additionally employ motion smoothing, which we will discuss a bit later. Combining these, we can render the user’s drawings onto the webcam stream, as if the user is drawing in mid-air.
## Challenges we ran into
One of the major challenges was that, due to the low frame rates of OpenPose, the generated paths were choppy, with lines jumping back and forth. To solve this issue, we employed motion smoothing, which we can do rather efficiently in O(1) time with our own data structure. This way, we get smooth lines without much performance penalty. In addition, we spent many hours testing our model and refining hyperparameters to ensure it works intuitively.
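One plausible way to get O(1)-per-sample smoothing is an exponential moving average of the tracked fingertip position (a sketch of the idea, not necessarily the exact data structure used):

```python
# One plausible O(1)-per-sample smoother: an exponential moving average of the
# tracked fingertip position (a sketch of the idea, not the exact structure used).
class SmoothedPoint:
    def __init__(self, alpha=0.35):
        self.alpha = alpha                  # lower alpha -> smoother, laggier strokes
        self.x = self.y = None

    def update(self, x, y):
        """Constant-time update for each new keypoint from the pose estimator."""
        if self.x is None:
            self.x, self.y = x, y
        else:
            self.x = self.alpha * x + (1 - self.alpha) * self.x
            self.y = self.alpha * y + (1 - self.alpha) * self.y
        return self.x, self.y

pen = SmoothedPoint()
stroke = [pen.update(x, y) for x, y in [(100, 50), (118, 47), (96, 60), (130, 55)]]
```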
## What's next for AirCanvas
We were inexperienced with this technology, and we wish we had more time to add the following features:
Support for multiple people drawing with one camera simultaneously
A friendlier interface for user to customize pen settings such as color, thickness etc.
Tools you’d see in note taking apps, such as ruler | ## Inspiration
A deep and unreasonable love of xylophones
## What it does
An air xylophone right in your browser!
Play such classic songs as twinkle twinkle little star, ba ba rainbow sheep and the alphabet song or come up with the next club banger in free play.
We also added an air guitar mode where you can play any classic 4 chord song such as Wonderwall
## How we built it
We built a static website using React which utilised Posenet from TensorflowJS to track the users hand positions and translate these to specific xylophone keys.
We then extended this by creating Xylophone Hero, a fun game that lets you play your favourite tunes without requiring any physical instruments.
## Challenges we ran into
Fine tuning the machine learning model to provide a good balance of speed and accuracy
## Accomplishments that we're proud of
I can get 100% on Never Gonna Give You Up on XylophoneHero (I've practised since the video)
## What we learned
We learnt about fine tuning neural nets to achieve maximum performance for real time rendering in the browser.
## What's next for XylophoneHero
We would like to:
* Add further instruments including a ~~guitar~~ and drum set in both freeplay and hero modes
* Allow for dynamic tuning of Posenet based on individual hardware configurations
* Add new and exciting songs to Xylophone
* Add a multiplayer jam mode | partial |
## Inspiration
Our inspiration comes from the 217 million people in the world who have moderate to severe vision impairment, and 36 million people who are fully blind. In our modern society, with all its comforts, it is easy to forget that there are so many people who do not have the same luxuries as us. It is unthinkably difficult for these visually impaired individuals to navigate everyday life and activities. We believe that the new technology of this era presents a potential solution to this issue.
## What it does
InsightAI detects the location and size of common objects in real time. This data is necessitated by our novel 3D audio spatialization algorithm, which in turn, powers our Augmented Reality audio system. This system communicates the location of said objects to the user and allows for the formulation of a mental heatmap of the world. All of this is done through just a conventional mobile smartphone and headphones. This process can be terminated simply using our intuitive haptic user experience (so that it is accessible for those with vision impairments). It also supports multiple languages in order for the project to be scalable to other countries and cultures.
## How we built it
We used Tensorflow.js for the real-time object detection. It is trained on the COCO Single Shot MultiBox Detection dataset with 90 object classes and 330,000 images. We then convert the object(s) into an audio signal via a text-to-speech algorithm with natural language synthesis that supports multiple languages. We then used a custom algorithm to effectively deliver the AR audio to the user’s audio device, in such a manner, that the user can understand the location of the indicated object. In order to properly interface with the visually impaired, we focused on minimalistic and intuitive audio-first design principles to facilitate usage by the intended audience. Finally we hosted the entire web app on Zeit to allow it to be accessible to everyone.
## How does the augmented reality (AR) sound system work?
The sound is outputted binaurally through the web audio API. This means that we play each headphone or earbud differently, based on the location of the object. The differentiation in the sound is determined by our algorithm. You can think of our algorithm as a program that creates an mental audio data heatmap of the world around the user. Because of this immersive system, the user can very intuitively locate objects.
## Challenges we ran into
There were a multitude of bugs, which were eventually solved through discussion and collaboration. One such bug was that the audio was quite slow and did not match with the rate of object detection, because we were downloading the audio snippet from an external source for every frame. We found a solution to this problem by downloading the files locally and playing those files complementing the objects detected. Additionally, we ran into many issues pertaining to getting the tensorflow.js model to work with mobile instead of desktop.
## Accomplishments that we're proud of and what we learned
We are proud that we learned how to use Tensorflow.js to recognize many objects in real time, as this was one of our first projects that used live ML, and we are very proud of how it turned out. We also learned how to use the Web Audio API and created a surround sound left and right channel system using headphones. Further, this was one of our first projects to integrate AR.
## What's next for InsightAI
We will definitely be updating our project in the future to support more functionality. For example, optical character recognition and facial recognition could be used to greatly make the lives of the visually impaired in everyday life. Imagine if the blind could immediately recognize people they knew through such a system. An integrated OCR system would open up the possibility for writing unaccompanied by braille to be understood by the impaired, allowing for much easier navigation of both everyday life. Our app is very capable of scaling up to multiple different languages as well. | ## Inspiration
In today’s day and age, there are countless datasets available containing valuable information about any given location. This includes analytics based on urban infrastructures (dangerous intersections), traffic, and many more. Using these datasets and recent data analytics techniques, a modernized approach can be taken to support insurance companies with ideas to calculate effective and accurate premiums for their clients. So, we created Surely Insured, a platform that leverages this data and supports the car insurance industry. With the help and support from administrations and businesses, our platform can help many insurance companies by providing a modernized approach to make better decisions for pricing car insurance premiums.
## What it does
Surely Insured provides car insurance companies with a data-driven edge on calculating premiums for their clients.
Given a location, Surely Insured provides a whole suite of information that the insurance company can use to make better decisions on insurance premium pricing. More specifically, it provides possible factors or reasons for why your client's insurance premium should be higher or lower.
Moreover, Surely Insured serves three main purposes:
* Create a modernized approach to present traffic incidents and severity scores
* Provide analytics to help create effective insurance premiums
* Use the Google Maps Platform Geocoding API, Google Maps Platform Maps JavaScript API, and various Geotab Ignition datasets to extract valuable data for the analytics.
## How we built it
* We built the web app using React as the front-end framework and Flask as the back-end framework.
* We used the Google Maps Platform Maps Javascript API to dynamically display the map.
* We used the Google Maps Platform Geocoding API to get the latitude and longitude given the inputted address.
* We used three different Geotab Ignition datasets (HazardousDrivingAreas, IdlingAreas, ServiceCenterMetrics) to calculate metrics (with Pandas) based on the customer's location.
## Challenges we ran into
* Integrating the Google Maps Platform JavaScript API and Google Maps Platform Geocoding API with the front-end was a challenge.
* There were a lot of features to incorporate in this project, given the time constraints. However, we were able to accomplish the primary purpose of our project, which was to provide car insurance companies an effective method to calculate premiums for their clients.
* Not being able to communicate face to face meant we had to rely on digital apps, which made it difficult to brainstorm concepts and ideas. This was exceptionally challenging when we had to work together to discuss potential changes or help debug issues.
* Brainstorming a way to combine multiple API prizes in an ambitious manner was quite a creative exercise and our idea had gone through multiple iterations until it was refined.
## Accomplishments that we're proud of
We're proud that our implementation of the Google Maps Platform APIs works as we intended. We're also proud of having the front-end and back-end working simultaneously and the overall accomplishment of successfully incorporating multiple features into one platform.
## What we learned
* We learned how to use the Google Maps Platform Map JavaScript API and Geocoding API.
* Some of us improved our understanding of how to use Git for large team projects.
## What's next for Surely Insured
* We want to integrate other data sets to Surely Insured. For example, in addition to hazardous driving areas, we could also use weather patterns to assess whether insurance premiums should be high or low. \* Another possible feature is to give the user a quantitative price quote based on location in addition to traditional factors such as age and gender. | ## Inspiration
Our inspiration stems the difficulty and lack of precision that certain online vision tests suffer from. Issues such as requiring a laptop and measuring distance by hand lead to a cumbersome process. Augmented reality and voice-recognition allow for a streamlined process that can be accessed anywhere with an iOS app.
## What it does
The app looks for signs of colorblindness, nearsightedness, and farsightedness with Ishihara color tests and Snellen chart exams. The Snellen chart is simulated in augmented reality by placing a row of letters six meters away from the camera. Users can easily interact with the exam by submitting their answers via voice recognition rather than having to manually enter each letter in the row.
## How we built it
We built these augmented reality and voice recognition features by downloading the ARKit and KK Voice Recognition SDKs into Unity 3d. These SDKs exposed APIs for integrating these features into the exam logic. We used Unity's UI API to create the interface, and linked these scenes into a project built for iOS. This build was then exported to XCode, which allowed us to configure the project and make it accessible via iPhone.
## Challenges we ran into
Errors resulting from complex SDK integrations made the beginning of the project difficult to debug. After this, a lot of time was spent trying to control the scale and orientation of augmented reality features in the scene in order to create a lifelike environment. The voice recognition software presented difficulties as its API was controlled by a lot of complex callback functions, which made the logic flow difficult to follow. The main difficulty in the latter phases of the project was the inability to test features in the Unity editor. The AR and voice-recognition APIs relied upon the iOS operating system which meant that every change in the code had to be tested through a long build and installation process.
## Accomplishments that we're proud of
With only one of the team members having experience with Unity, we are proud of constructing such a complex UI system with the Unity APIs. Also, this was the team's first exposure to voice-recognition software. We are also proud to have used what we learned to construct a cohesive product that has real-world applications.
## What we learned
We learned how to construct UI elements and link multiple scenes together in Unity. We also learned a lot about C# through manipulating voice-recognition data and working with 3D assets, all of which is new to the team.
## What's next for AR Visual Acuity Exam
Given more time, the app would be built out to send vision exam results to doctors for approval. We could also improve upon the scaling and representation of the Snellen chart. | partial |
## Inspiration
The general challenge of UottaHack 4 was to create a hack surrounding COVID-19. We got inspired by a COVID-19 restriction in the province of Quebec which requires stores to limit the number of people allowed in the store at once (depending on the store floor size). This results in many stores having to place an employee at the door of the shop to monitor the people entering/exiting, if they are wearing a mask and to make sure they disinfect their hands. Having an employee dedicated to monitoring the entrance can be a financial drain on a store and this is where our idea kicks in, dedicating the task of monitoring the door to the machine so the human resources could be best used elsewhere in the store.
## What it does
Our hack monitors the entrance of a store and does the following:
1. It counts how many people are currently in the store by monitoring the number of people that are entering/leaving the store.
2. Verifies that the person entering is wearing PPE ( a mask ). If no PPE was recognized, and a reminder to wear a mask is played from a speaker on the Raspberry Pi.
3. Verify that the person entering has used the sanitation station and displays a message thanking them for using it.
4. Display information to people entering such as. how many people are in the store and what is the store's max capacity, reminders to wear a mask, and thanks for using the sanitation station
5. Provides useful stats to the shop owner about the monitoring of the shop.
## How we built it
**Hardware:** The hack uses a Raspberry Pi and it PiCam to monitor the entrance.
**Monitoring backend:** The program starts by monitoring the floor in front of the door for movement this is done using OpenCV. Once movement is detected pictures are captured and stored. the movement is also analyzed to estimate if the person is leaving or entering the store. Following an event of someone entering/exiting, a secondary program analyses the collection of a picture taken and submits chooses one of them to be analyzed by google cloud vision API. The picture sent to the google API looks for three features: faces, object location (to identify people's bodies), and labels (to look for PPE). Using the info from the Vision API we can determine first if the person has PPE and if the difference in the number of people leaving and entering by comparing the number of faces to the body detected. if the is fewer faces than bodies then that means people have left, if there is the same amount then only people entered. Back on the first program, another point is being monitored which is the sanitation station. if there is an interaction(movement) with it then we know the person entering has used it.
**cloud backend:**
The front end and monitoring hardware need a unified API to broker communication between the services, as well as storage in the mongoDB data lake; This is where the cloud backend shines. Handling events triggered by the monitoring system, as well as user defined configurations from the front end, logging, and storage. All from a highly available containerized Kubernetes environment on GKE.
**cloud frontend:**
The frontend allows the administration to set the box parameters for where the objects will be in the store. If they are wearing a mask and sanitized their hands, a message will appear stating "Thank you for slowing the spread." However, if they are not wearing a mask or sanitized their hands, then a message will state "Please put on a mask." By doing so, those who are following protocols will be rewarded, and those who are not will be reminded to follow them.
## Challenges we ran into
On the monitoring side, we ran into problems because of the color of the pants. Having bright-colored pants registered as PPE to Google's Cloud Vision API (they looked to similar to reflective pants PPe's).
On the backend architecture side, developing event driven code was a challenge, as it was our first time working with such technologies.
## Accomplishments that we're proud of
The efficiency of our computer vision is something we are proud of as we initially started with processing each frame every 50 milliseconds, however, we optimized the computer vision code to only process a fraction of our camera feed, yet maintain the same accuracy. We went from 50 milliseconds to 10 milliseconds
## What we learned
**Charles:** I've learn how to use the google API
**Mingye:** I've furthered my knowledge about computer vision and learned about google's vision API
**Mershab:** I built and deployed my first Kubernetes cluster in the cloud. I also learned event driven architecture.
## What's next for Sanitation Station Companion
We hope to continue improving our object detection and later on, detect if customers in the store are at least six feet apart from the person next to them. We will also remind them to keep their distance throughout the store as well. Their is also the feature of having more then on point of entry(door) monitored at the same time. | ## Inspiration
When we thought about tackling the pandemic, it was clear to us that we'd have to **think outside the box**. The concept of a hardware device to enforce social distancing quickly came to mind, and thus we decided to create the SDE device.
## What it does
We utilized an ultra-sonic sensor to detect bodies within 2m of the user, and relay that data to the Arduino. If we detect a body within 2m, the buzzer and speaker go off, and a display notifies others that they are not obeying social distancing procedures and should relocate.
## How we built it
We started by creating a wiring diagram for the hardware internals using [Circuito](circuito.io). This also provided us some starter code including the libraries and tester code for the hardware components.
We then had part of the team start the assembly of the circuit and troubleshoot the components while the other focused on getting the CAD model of the casing designed for 3D printing.
Once this was all completed, we printed the device and tested it for any bugs in the system.
## Challenges we ran into
We initially wanted to make an Android partner application to log the incidence rate of individuals/objects within 2m via Bluetooth but quickly found this to be a challenge as the team was split geographically, and we did not have Bluetooth components to attach to our Arduino model. The development of the Android application also proved difficult, as no one on our team had experience developing Android applications in a Bluetooth environment.
## Accomplishments that we're proud of
Effectively troubleshooting the SDE device and getting a functional prototype finished.
## What we learned
Hardware debugging skills, how hard it is to make an Android app if you have no previous experience, and project management skills for distanced hardware projects.
## What's next for Social Distancing Enforcement (SDE)
Develop the Android application, add Bluetooth functionality, and decrease the size of the SDE device to a more usable size. | ## Inspiration
Are you tired of the endless hours spent deciphering course schedules, and trying to align your classes with your friends' schedules? Well, that's a problem we University students face every term. Say goodbye to the hassle and confusion, and welcome the innovative solution: myUniCourseBuddy.
## What it does
myUniCourseBuddy allows you to sync preferences and effortlessly generates an ideal schedule that maximizes shared classes while accommodating individual customizations. With features like real-time updates, conflict resolution, and intuitive design, this app ensures a collaborative and stress-free approach to planning your academic journey.
## Challenges we ran into
One of the most significant hurdles we faced was solving the course selection algorithm to generate optimal schedules. Crafting an algorithm that could analyze the diverse preferences of individual users, while also factoring in their friends' preferences and various course offerings, proved to be extremely difficult. The challenge lay in the technical complexity and time efficiency of the algorithm.
## Accomplishments that we're proud of
We're our hard work and persistence throughout tough problems.
## What we learned
Although the project was not completed to our expectations, we learned a lot while struggling on this journey, including a new tech stack and new debugging skills. As someone once said, "To rank the effort above the prize may be called love." | partial |
## Inspiration
We are a group of students passionate about automation and pet companions. However, it is not always feasible to own a live animal as a busy engineer. The benefits of personal companionship are plentiful, including decreased blood pressure. Automation is the way of the future. We developed routines using a iRobot Create 2 robot which can dance to music, follow its owner like a dog, and bring items from another room on its top.
## What it does
Spot uses visual processing and image recognition to follow its owner all over their home. He is a helpful companion capable of carrying packages, providing lighting and cleaning for his owner. Furthermore, his warm and friendly appearance is always welcome in any home. The robot platform used also has the capability for autonomous floor cleaning. Finally, Spot's movements can be controlled through a web application which also displays graphs from all the Roomba's sensors.
## How we built it
Spot was built using a variety of different software. The webpage used to control spot was coded in HTML and CSS with Django/Python running in the backend. To control the roomba and display the sensor graphs we used matlab. To do the image processing and get the roomba to follow specific colours the openCV library with python bindings was used.
## Challenges we ran into
One major challenge was being able to display the all the graphs/data on the website in real time. Having different APIs for Python and Matlab was a struggle which we overcame.
## Accomplishments that we're proud of
As a group of relatively new hackers who met at YHacks, we are extremely proud of being able to use our different engineering disciplines to implement both hardware and software into our hack. We are proud of the fact that we were able to learn about image processing, Django/Python and use them to control the movements of the Roomba. In addition to completing all 3 iRobot challenges, we were still able to accomplish 2 tasks of our own and learned plenty of things along the way!
## What we learned
Throughout the creation of spot our group learned many new technologies. As a group we learned how to run Django in the back end of a webpage and be able to control a roomba through a webpage. In addition, we were able to learn about the openCv library and how to control the roomba through image processing. We were also able to learn how to do various things with the roomba, such as making it sing, manipulating the sensor data to produce graphs and track its movements.
## What's next for Spot
Spot has many real world applications. As a mobile camera-enabled robot, it can perform semi-autonomous security tasks e.g. patrolling. Teleoperation is ideal for high-risk situations, including bomb disposal, fire rescue, and SWAT. This device also has therapeutic applications such as those performed by the PARO seal robot---PTSD treatment and personal companionship. As a generic service robot, the robot can include a platform for carrying personal items. A larger robot could assist with construction, or a stainless steel robot could follow a surgeon with operating tools. | ## Inspiration
These days, we’re all stuck at home, you know, cuz of COVID-19 and everything.
If you’re anything like me, the lockdown has changed my online shopping habits quite a bit, and I find myself shopping on websites like Amazon way more than I used to.
With this change to my lifestyle, I realized that e-commerce is weirdly inconvenient when you really want to get the best price. You have to go check each retailer’s website individually, and compare prices manually. I then thought to myself, “Hey, it would be really cool if I had a price comparison tool to search for the best price across a variety of online retailers”. Before I knew it, I came up with Buy It, an application that does just that.
## What it does
Given a product name to search, Buy It will return search results from different online retailers giving you all the information you need to save money, all in one place.
## How I built it
Buy It is a mobile app developed in Android Studio using Java and Kotlin. The pricing data presented in the app is actually collected via web scraping. To do that, I leveraged the ParseHub API after training their scraping bot to search e-commerce websites with whatever product the user is interested in. The main issue is, the API isn’t easily compatible with Java, so I thought to host an API endpoint with Flask in Python to then make data available for my Java app to request. But I had a better idea: using Solace’s PubSub+ Event Broker. This made a lot of sense, since Solace’s event driven programming solution allows for easy two-way communication between both the client and the publisher.
That means my app can send the user’s product search to the Python script, Python runs the API on that search to initiate a web scraping action, then Python also sends back whatever data it collects. Meanwhile, the Java app was simply waiting for messages on the topic of the search product. After receiving a response, it can then process the pricing information and display it to the user.
Another cool thing about using Solace is that it’s an extremely scalable solution. I could make this app track prices real time from a large number of retailers at once, in other words, I’d be generating constant heavy network traffic, and the event broker would have no issue. By leveraging PubSub+, I was able to make a quick, easy, and powerful communication pathway for all the components of my hack to talk through, while also future-proofing my app for further development and updates.
## Challenges we ran into
Unfortunately, I was not quite able to get the finishing touches in for this hack. The app was even able to receive the pricing information, it was just a matter of parsing it and displaying it in a user-friendly manner. With my only team member not being available to work with me last minute, this ended up being a solo hack programmed in Java and Python, languages I’ve never used before. That being said, the majority of it is done, and I put an immense amount of effort into learning all this.
## Accomplishments that we're proud of
I'm super excited about how much I've learned. I used Solace which was very useful and educational. I've also never built a mobile app, never done web scraping, never coded in Python or Java/Kotlin, so lots of new stuff. I was also impressed with my progress as a solo hack.
## What we learned
As previously mentioned, I learned how to build a mobile app in Android Studio with Java/Kotlin. I learned more about programming in Python (particularly API endpoints and such), as well as web scraping. Last but not least, I learned about Solace event brokers and how they can replace REST API endpoints.
## What's next for Buy It!
I was thinking of creating a watchlist, where the user can save items, which will subscribe the app to a topic that receives real-time price changes for said items. That way, the user is notified if the item is being sold below a certain price threshold. Because Buy It is built off of such a strong foundation with Solace, there’s lots of room for new features and enhancements, so I really look forward to getting creative with Buy It! | I created a command line utility that allows the user to view a tagcloud of the terms used in the EDGAR documents for the annual reports. This utility creates a visualization that shows the most common terms, ignoring outliers with the color darkness relating directly to the relationship that term has to the argument. For example, if 'manufacturing' is the argument, the terms most related to manufacturing will be the brightest while the terms less important will be faded out. | losing |
# What's that space by Space Invaders (Team 13)
Available at <https://whatsthat.space/> *(it's designed for mobile ([How to View Mobile View in Chrome](https://www.browserstack.com/guide/view-mobile-version-of-website-on-chrome)))*
Github at <https://github.com/rafeeJ/UO-Hack>
Presentation at <https://docs.google.com/presentation/d/10gH-nAkfvkol3nwRUrlTFk5scTZZ_7PH27EZIoUwCaw/edit?usp=sharing>
## Inspiration
We have enjoyed [GeoGuessr](https://www.geoguessr.com/) and see other fun location-based activities like [Geocaching](https://www.geocaching.com/play). We thought it would be fun to make our own location-based app that combined the two ideas while trying out some new tech!
Since Covid hit last march, it's been difficult to socialise and see friends and family. This led us to combine the Health and Community paths of the hack, we wanted to encourage people to get out and exercise, allow them to socialise (safely!) and discover their local area. There are plenty of local beauty spots all around us that we might not have seen before! What's That Space encourages people to get out, take pretty photos and challenge other people to discover the area.
## What It Does
There are two types of players: Explorers and Creators (you can be both!).
**Creators** submit photos of beauty spots to the app, which get posted onto maps for the explorers to find. The catch: only the *general* location of the original photo's geolocation is provided to the explorer. This makes recreating the photo a fun challenge.
**Explorers** use the general location provided by the app to try their best to recreate the photo that the creator submitted. Explorers then receive points based on how quickly and how well they were able to recreate the Creator's photo. Complete challenges to rise up the global leaderboard and earn crypto rewards based on your score!
## How We Built It
We first took some time to plan out our ideas and designs so we had something to quickly go off, this helped us all get on the same wavelength and get creating quickly!
The core of our project is built within Angular 10, this allowed us to quickly create an attractive UI with all the functionality we required. Plus Google's Firebase for the middleware and database layer. Firebase was helpful as it hosts all of the images, user data and authenticates all of the users for the app!
We also got to use the Google Maps API, it was fascinating to play around with this for the first time and manage to get a working map we could interact with. This allowed us to display challenges around the world (check it out we've added some sample ones!) and make the app feel more alive!
Alongside this, we added small extras such as using a Google Cloud VM, Python and DataStax Astra DB to generate Nimiq (Node.js based Crypto) cash links to reward users for their score (available in profile with a score > 0).
## Challenges We Ran Into
We all tried to pick roles which were unfamiliar to us, and use new tools. This meant that there was a lot of learning! This was particularly true of Firebase as only 1/4 of us had any experience and it was crucial to our project working successfully, luckily through a lot of documentation we got a DB going, our app hosted and user authentication. Alongside this half of us hadn't used Angular, and decided to dive right in and not take a backseat, this took lots of working together to figure out a ton of bugs and get the design right, but we've now got something we're proud of!
## Accomplishments That We're Proud Of
**Rafee**
I have an Angular background, so I took a backseat on the frontend logic and opted to focus on the style of the app. I implemented Google Material to make sure the design was professional, and for its mobile capabilities. I feel a lot more confident in my SCSS abilities after this hack, so I appreciate that. I also gained experience working with Domain.com and Cloudflare to ensure a HTTPS connection to the site. This was to ensure that user data cannot be leaked and that the webcam is properly utilised.
**Tom**
Before doing this project I had little experience with Angular and Firebase. I was impressed with how quickly these technologies allowed us to build a working product. I was particularly proud of the work I did to integrate the [ngx-webcam](https://www.npmjs.com/package/ngx-webcam) package, this involved learning about the ImageData web interface, which was another first for me. I also set up Firebase Storage for saving the user's images and a private moderator page where moderators can accept or reject user submissions to the challenges.
**Adam**
I spend most of my time working with server code so this was my first experience working with a *serverless* architecture.
I enjoyed encountering problems from a new perspective.
Before this experience, I had almost no experience with Angular and Firebase and that meant that integrating communication between the two proved to be a real challenge.
However, I learned a lot about the two during the hackathon and managed to establish a fully-fledged service which communicated between the two.
**Jake**
First and foremost I've enjoyed working with Firebase for the first time, and seeing how seamlessly google cloud services fit together. While quite a small part, I'm most proud of managing to add a rewards system (based on users points) which uses a Google VM serving a Python Flask API which uses the Nimiq (Node based Cryptocurrency/Ecosystem) to generate cash links as rewards (i.e. <https://gyazo.com/975966d01662557d7fb861d18311fe51>). Also using a SQL database for the first time in DataStax Astra to store user data to required allow reward generation.
**As a Team**
Taking part in a hackathon where we have to create a project end-to-end so quickly was a completely new experience for us all. Luckily we decided to create a clear plan and stick to it, focusing on making the app fun and easy to use! Throughout the hackathon, we (tried to) always check against our requirements and pre-defined workflows to ensure we were going in the right direction. It has been an interesting (fun) challenge managing the workload, our skills and a git repo with 4 people pushing constantly!
## What We Learned
With this being our first hackathon we wanted to focus on two things; learning new things and creating an app we were proud of. It was a learning experience working to create something in such a small timeframe. We all took on a lot of learning using; Firebase, Angular & the Google Maps API.
As a result of our hard work and collaboration, we are really happy with the work we produced. From the beginning, we took the opportunity to put the project management theory we had learnt at university into action and we believe that the practical skills we picked up during the hackathon made a huge difference to our productivity.
## Challenges
**Sun Life** - One of the key focuses of our app has been remote socialisation and we think our app does a great job of this! Having been through many lockdowns in the UK, our focus was allowing safe socialisation and trying to make personal connections. To do this we've focused on being able to get out in your local area and explore, able to leave challenges for your friends (and others)! Plus if you can't get out, you can do it all from Google Streetview in the comfort of your own home.
**Domain.com** - [whatsthat.space](https://whatsthat.space) - Our app is all about finding locations (finding out whats the space in the picture) and it fits great in a domain!
**Google Cloud** - We wanted to ensure that our app was accessible to anyone in the world (especially since we’re in europe not Ottawa!), so the scalability and flexibility of a cloud platform was essential! 3/4 of us had never used it before so it was a great learning experience, allowing us to implement hosting (to match our amazing domain name), our main database and user authentication. This was really useful for us as the huge range of services gave us everything we needed and it allowed us to play around with other things like cloud functions (for a message of the day - <https://us-central1-uottowahack.cloudfunctions.net/function-3>) and hosting an API on Google cloud VMs (generating rewards - <http://65.52.241.124:5000/cash>).
## What's next for What's That Space
We would love to expand the app and make it more robust, expanding the points system and adding local leaderboards. Initially, we had hoped we could use AI image matching to spot images but quickly realised this was out of scope so we would love to add this in future or allow player verification so it doesn't rely on us! | ## Inspiration
As first-year students, we have experienced the difficulties of navigating our way around our new home. We wanted to facilitate the transition to university by helping students learn more about their university campus.
## What it does
A social media app for students to share daily images of their campus and earn points by accurately guessing the locations of their friend's image. After guessing, students can explore the location in full with detailed maps, including within university buildings.
## How we built it
Mapped-in SDK was used to display user locations in relation to surrounding buildings and help identify different campus areas. Reactjs was used as a mobile website as the SDK was unavailable for mobile. Express and Node for backend, and MongoDB Atlas serves as the backend for flexible datatypes.
## Challenges we ran into
* Developing in an unfamiliar framework (React) after learning that Android Studio was not feasible
* Bypassing CORS permissions when accessing the user's camera
## Accomplishments that we're proud of
* Using a new SDK purposely to address an issue that was relevant to our team
* Going through the development process, and gaining a range of experiences over a short period of time
## What we learned
* Planning time effectively and redirecting our goals accordingly
* How to learn by collaborating with team members to SDK experts, as well as reading documentation.
* Our tech stack
## What's next for LooGuessr
* creating more social elements, such as a global leaderboard/tournaments to increase engagement beyond first years
* considering freemium components, such as extra guesses, 360-view, and interpersonal wagers
* showcasing 360-picture view by stitching together a video recording from the user
* addressing privacy concerns with image face blur and an option for delaying showing the image | ## Inspiration## Inspiration
As the summer break nears its end, we and our friends planned numerous trips around the city. Although we did have a positive experience overall, due to poor planning, we wasted a lot of time and our schedule always took unexpected turns. Thus, we wanted to tackle this issue and optimize our meaningful time with our friends. Consequently, we decided to find a solution to improve event planning as a whole. Despite having no experience in app development, we decided to design a prototype mobile application since it would be an effective way to give access to others around the globe.
## What it Does
“What’s the Plan” is an app prototype that essentially takes away all the main complaints that people have when planning a group or solo outing whether it be big or small. To begin there is a calendar section where you are able to set the days you are unavailable, and the app will compare your calendar to the others in your group (which were previously added into the group event via social media quick adding) This will then allow the app to set on a date without the hassle of confirming a date one by one with everyone involved. This furthermore also adjusts itself when group members have changes of plans and sends out notifications to adjust dates. In addition to finding a date that works, finding activities that work for everyone while also being indecisive on what to do was a common problem, to remedy this the ideas sections comes in handy. Here all members can add in their ideas, linked to an actual place where we can go using the built in search algorithm that will use keywords and give back options based off of star ratings, user interactions and location proximity. This will allow the app to right away know where the idea will take place and for more detail to be involved in the voting section, where everyone can vote days in advance of the trip on the different activities. Once all the activities are decided they are seamlessly converted into a readable itinerary that is compact, but can be broken down for more detail on any specific activity. It can be save to drive in the case that Wi-Fi is unavailable. All sections include a messenger icon, which can be swapped with any other linked social media, which links the app to any group chat of choice so that messaging and planning can be done seamlessly through cross- app interaction. Finally, the to-bring/ to-do list section is split into 2 parts: a solo list that is personal and is for yourself, and a group list where all the event-goers can have a communal list for anything that they will share as to not pack the same thing too many times.
## How we built it
After extensive brainstorming on the service and the many features, google jamboard was used to draw a very simple sketch of our desired app design. Afterwards we sought to create an app prototype design which was subsequently developed using the Figma software applying both UI and UX knowledge we were introduced to during the workshop events. We felt that this strategy suited our abilities as well as what we were interested in learning over the course of the event. We used the many resources at our disposal to design an app which has several features and many sections.
## Challenges we ran into
The primary challenge of our project was the lack of programming knowledge in general. Initially, we were driven to learn an app development software as fast as we could within the hackathon duration. Needless to say, we quickly recognized that none of us were experienced enough to learn such a diverse skill in such a short amount of time. Instead, as engineering students, we opted to design a prototype using Figma. For our first hackathon, we decided to thus focus on the aesthetics, UI and overall idea of the app.
The second issue was the rise of conflicting ideas within the team. Initially, we had decided the primary features of the app and each designed a different part of the application. Long story short, it turned out that we all had different ideas and creations. When we assembled all of our designs in one, it looked completely out of place. Because of this, we had to discuss the themes and aesthetic details before actually designing the frames. One advantage in creating three unique creations was that we could throw all of our ideas into a final, refined prototype. Overall, we learned that diverse ideas within a team are advantageous because we are able to build upon each others’ thought processes.
## Accomplishments that we're proud of
As this was our first hackathon we were very intimidated by the expectations to develop a solution to a problem during the limited competition period. By the end of the competition, we were able to apply our knowledge and experience to think of a real-life problem and create a feasible solution to it while developing a business pitch and thinking about the economics of the situation, which is a situation that resembles projects in the workplace that goes beyond the classroom. We are very proud of these accomplishments and are also very happy to gain skills in UI/UX design, and business and presentation skills. This positive experience has encouraged us to reach further and participate in future hackathons.
## What we learned
During the extensive weekend hackathon we have learned a multitude of both soft and technical skills. Technically, we gained experience in UI/UX design and learned to use the design software Figma. We also learned about the general idea of app development and the idea generating process. In terms of soft skills, business presentation and writing along with teamwork, conflict management, and compromise were some of the most critical skills that we took away from this hackathon. Both of these areas are highly applicable and beneficial to our future careers and academics. In addition, we learned many interesting and cool new skills through the many workshops that were held over the week. We particularly enjoyed the UI/UX workshops as it was most applicable to our project.
## What's next for “What's the plan”?
“What’s the plan?” is still very much in its prototyping phase and would definitely greatly benefit from the help of software engineers to bring it to life. The imaginative design and problem that it solves will definitely be able to be a great help to anyone whether it be to plan a move, a wedding, an event with friends, or travelling on your own. We hope to be able to bring all these features to life to be able to make having fun be as stress free and as consistent as possible, while allowing our users to be able to experience things they would never have before. | partial |
## Inspiration
3D Printing offers quick and easy access to a physical design from a digitized mesh file. Transferring a physical model back into a digitized mesh is much less successful or accessible in a desktop platform. We sought to create our own desktop 3D scanner that could generate high fidelity, colored and textured meshes for 3D printing or including models in computer graphics. The build is named after our good friend Greg who let us borrow his stereocamera for the weekend, enabling this project.
## How we built it
The rig uses a ZED stereocamera driven by a ROS wrapper to take stereo images at various known poses in a spiral which is executed with precision by two stepper motors driving a leadscrew elevator and a turn table for the model to be scanned. We designed the entire build in a high detail CAD using Autodesk Fusion 360, 3D printed L-brackets and mounting hardware to secure the stepper motors to the T-slot aluminum frame we cut at the metal shop at Jacobs Hall. There are also 1/8th wood pieces that were laser cut at Jacobs, including the turn table itself. We designed the power system around an Arduino microcontroller and and an Adafruit motor shield to drive the steppers. The Arduino and the ZED camera are controlled by python over a serial port and a ROS wrapper respectively to automate the process of capturing the images used as an input to OpenMVG/MVS to compute dense point clouds and eventually refined meshes.
## Challenges we ran into
We ran into a few minor mechanical design issues that were unforeseen in the CAD, luckily we had access to a 3D printer throughout the entire weekend and were able to iterate quickly on the tolerancing of some problematic parts. Issues with the AccelStepper library for Arduino used to simultaneously control the velocity and acceleration of 2 stepper motors slowed us down early Sunday evening and we had to extensively read the online documentation to accomplish the control tasks we needed to. Lastly, the complex 3D geometry of our rig (specifically rotation and transformation matrices of the cameras in our defined world coordinate frame) slowed us down and we believe is still problematic as the hackathon comes to a close.
## Accomplishments that we're proud of
We're proud of the mechanical design and fabrication, actuator precision, and data collection automation we achieved in just 36 hours. The outputted point clouds and meshes are still be improved. | ## Inspiration
I've always been fascinated by the complexities of UX design, and this project was an opportunity to explore an interesting mode of interaction. I drew inspiration from the futuristic UIs that movies have to offer, such as Minority Report's gesture-based OS or Iron Man's heads-up display, Jarvis.
## What it does
Each window in your desktop is rendered on a separate piece of paper, creating a tangible version of your everyday computer. It is fully featured desktop, with specific shortcuts for window management.
## How I built it
The hardware is combination of a projector and a webcam. The camera tracks the position of the sheets of paper, on which the projector renders the corresponding window. An OpenCV backend does the heavy lifting, calculating the appropriate translation and warping to apply.
## Challenges I ran into
The projector was initially difficulty to setup, since it has a fairly long focusing distance. Also, the engine that tracks the pieces of paper was incredibly unreliable under certain lighting conditions, which made it difficult to calibrate the device.
## Accomplishments that I'm proud of
I'm glad to have been able to produce a functional product, that could possibly be developed into a commercial one. Furthermore, I believe I've managed to put an innovative spin to one of the oldest concepts in the history of computers: the desktop.
## What I learned
I learned lots about computer vision, and especially on how to do on-the-fly image manipulation. | ## Inspiration
It's frustrating to always submit the same form over and over again at conferences! Using the IdentiFi wireless hardware tool, you can swipe your RFID tag onto a recruiter's table and have all your information automatically entered within their database. You can also authenticate based on parameters set up by the host. | winning |
Worldwide, there have been over a million cases of animal cruelty over the past decade. With people stuck at home, bottling up their frustrations. Moreover, due to COVID, these numbers aren’t going down anytime soon. To tackle these issues, we built this game, which has features like:
* Reminiscent of city-building simulators like Sim-City.
* Build and manage your animal shelter.
* Learn to manage finances, take care of animals, set up adoption drives.
* Grow and expand your shelter by taking in homeless animals, and giving them a life.
The game is built with Unity game engine, and runs on WebGL, utilizing the power of Wix Velo, which allows us to quickly host and distribute the game across platforms from a single code base. | ## Inspiration
There are two types of pets wandering unsupervised in the streets - ones that are lost and ones that don't have a home to go to. Pet's Palace portable mini-shelters services these animals and connects them to necessary services while leveraging the power of IoT.
## What it does
The units are placed in streets and alleyways. As an animal approaches the unit, an ultrasonic sensor triggers the door to open and dispenses a pellet of food. Once inside, a live stream of the interior of the unit is sent to local animal shelters which they can then analyze and dispatch representatives accordingly. Backend analysis of the footage provides information on breed identification, custom trained data, and uploads an image to a lost and found database. A chatbot is implemented between the unit and a responder at the animal shelter.
## How we built it
Several Arduino microcontrollers distribute the hardware tasks within the unit with the aide of a wifi chip coded in Python. IBM Watson powers machine learning analysis of the video content generated in the interior of the unit. The adoption agency views the live stream and related data from a web interface coded with Javascript.
## Challenges we ran into
Integrating the various technologies/endpoints with one Firebase backend.
## Accomplishments that we're proud of
A fully functional prototype! | # Arctic Miner - The Crypto Club Penguin
Basic blockchain game that allows the user to collect penguins, breed them as well as trade them. All penguins are stored on the ethereum blockchain.
## What it Does
Our program Arctic Miner is a collection game that takes advantage of the **ERC721** token standard. This project allows the user to have a decentralized inventory of penguins that have specific traits. The user can breed these penguins to create offspring that have different traits that takes advantage of our genetic algorithm. Each penguin is it's own **non-fungible** ERC721 token with it's own personal traits that are different from the other penguins.
## How We Built it
Arctic Miners runs on Angular front end with the smart contract being mode with Solidity
## What We Learned
Over the course of this hackathon, we learned a lot about various token standard as well as applications of the blockchain to create decentralized applications. In terms of sheer programming, we learned how to use Angular.js and Solidity. No group member had prior experience with either beforehand.
## Useful Definitions
**Non-Fungible:** non fungible tokens are tokens that have their own respective value and are not equivalent to other members of the same class. To explain metaphorically, a fungible token would be like a one dollar note. If I have a dollar note and you have a dollar note, and we trade them, neither one of us is at a loss since they are the exact same. A non-fungible token would be like a trading card. If I have a Gretzky rookie card and you have a Bartkowski card, technically both are hockey card, but the Gretzky card has a higher value than the Bartowski one; thus separating the two.
**ERC721:** this is a non-fungible token standard that differs from the more common ERC20 token which is fungible.
## Installation and setup
The node-modules/dependencies can be installed by running the following command in the root directory terminal:
```
npm install
```
Following dependencies in order to use the smart contracts:
* solidity 0.4.24
To compile the contracts first enter the following commands in the contracts folder:
```
npm install truffle
npm install babel-register
npm install babel-polyfill
npm install [email protected]
truffle compile
```
## How to run on a local machine
This web app was created using Angular.js
You will need to install Ganache if you would like to locally host the blockchain, otherwise the truffle file will need to be updated to use infura.
To build, simply enter in the terminal mapped to the root folder the following command:
```
ng server
```
Then, visit <http://localhost:4200> to see the visualization | winning |
## Inspiration
With recent booms in AI development, deepfakes have been getting more and more convincing. Social media is an ideal medium for deepfakes to spread, and can be used to seed misinformation and promote scams. Our goal was to create a system that could be implemented in image/video-based social media platforms like Instagram, TikTok, Reddit, etc. to warn users about potential deepfake content.
## What it does
Our model takes a video as input and analyzes its frames to find instances of that video appearing elsewhere on the internet. It then outputs several factors that help determine whether a deepfake warning to the user is necessary: URLs of websites where the video has appeared, publication dates scraped from those websites, previous deepfake identifications (i.e. whether a website already mentions the word "deepfake"), and similarity scores between the content of the video being examined and previous occurrences of it. A warning should be sent to the user if content similarity scores between the video and very similar videos are low (indicating the video has been tampered with) or if the video has already been identified as a deepfake by another website.
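As a rough sketch of that warning logic (the 0.6 similarity threshold and the record fields below are our own illustrative assumptions, not values fixed by the app):

```python
# Minimal sketch of the warning heuristic described above.
# The 0.6 threshold and the field names are illustrative assumptions.

def should_warn(matches):
    """matches: list of dicts like
    {"url": str, "publish_date": str, "flagged_as_deepfake": bool, "similarity": float}
    describing web occurrences of videos that look like the one under review."""
    for m in matches:
        # Another site already calls this (or a near-identical) video a deepfake.
        if m["flagged_as_deepfake"]:
            return True
        # The video matches a known clip visually but its spoken content differs,
        # which suggests the footage or audio has been tampered with.
        if m["similarity"] < 0.6:
            return True
    return False
```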
## How we built it
Our project was split into several main steps:
**a) finding web instances of videos similar to the video under investigation**
We used Google Cloud's Cloud Vision API to detect web entities that have content matching the video being examined (including full matching and partial matching images).
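A minimal sketch of this step with the `google-cloud-vision` Python client is shown below; the frame path is a placeholder, frames are assumed to have been sampled from the video beforehand, and credentials must already be configured for the project.

```python
# Sketch: run Cloud Vision web detection on one extracted frame and collect
# the URLs of pages and images that match it.
from google.cloud import vision

def find_web_matches(frame_path="frame.jpg"):
    client = vision.ImageAnnotatorClient()
    with open(frame_path, "rb") as f:
        image = vision.Image(content=f.read())

    web = client.web_detection(image=image).web_detection

    page_urls = [page.url for page in web.pages_with_matching_images]
    full_matches = [img.url for img in web.full_matching_images]
    partial_matches = [img.url for img in web.partial_matching_images]
    return page_urls, full_matches, partial_matches
```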
**b) scraping date information from potential website matches**
We utilized the htmldate Python library to extract original and updated publication dates from website matches.
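A small sketch of that step with `htmldate` (the URL comes from the matches found in step a; `find_date` returns an ISO date string or `None`):

```python
# Sketch: pull original and last-updated publication dates for a matched URL.
from htmldate import find_date

def get_publication_dates(url):
    original = find_date(url, original_date=True)  # earliest/original date
    updated = find_date(url)                       # most recent (updated) date
    return original, updated
```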
**c) determining if a website has already identified the video as a deepfake**
We again used Google Cloud's Cloud Vision API to determine if the flags "deepfake" or "fake" appeared in website URLs. If they did, we immediately flagged the video as a possible deepfake.
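The flag check itself reduces to a string test over the matched URLs from step a); `FLAGS` below is simply our keyword list:

```python
# Sketch: flag the video immediately if any matching page's URL already labels it.
FLAGS = ("deepfake", "fake")

def already_identified(matched_urls):
    return any(flag in url.lower() for url in matched_urls for flag in FLAGS)
```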
**d) calculating similarity scores between the contents of the examined video and similar videos**
If no deepfake flags have been raised by other websites (step c), we use Google Cloud's Speech-to-Text API to acquire transcripts of the original video and of the similar videos found in step a). We then compare pairs of transcripts using a cosine similarity algorithm written in Python to determine how similar the contents of two texts are (common, low-meaning words like "the", "and", "or", etc. are ignored when calculating similarity).
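A minimal sketch of the transcript comparison is below (the transcripts are assumed to come from Speech-to-Text; the stop-word list shown is a small illustrative subset of the one actually used):

```python
# Sketch: cosine similarity between two transcripts treated as bags of words,
# ignoring common low-meaning words.
import math
import re
from collections import Counter

STOP_WORDS = {"the", "and", "or", "a", "an", "of", "to", "in", "is", "it"}

def _vectorize(text):
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOP_WORDS)

def transcript_similarity(text_a, text_b):
    a, b = _vectorize(text_a), _vectorize(text_b)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```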
## Challenges we ran into
Neither of us had much experience using Google Cloud, which ended up being a major tool in our project. It took us a while to figure out all the authentication and billing procedures, but it was an extremely useful framework for us once we got it running.
We also found that it was difficult to find a deepfake online that wasn't already IDed as one (to test out our transcript similarity algorithm), so our solution to this was to create our own amusing deepfakes and test it on those.
## Accomplishments that we're proud of
We're proud that our project mitigates an important problem for online communities. While most current deepfake detection relies on AI, malicious generators can simply keep improving to counter those detection mechanisms. Our project takes an innovative approach that avoids this problem by instead tracking and analyzing the online history of a video (something that the creators of a deepfake video have no control over).
## What we learned
While working on this project, we gained experience in a wide variety of tools that we've never been exposed to before. From Google Cloud to fascinating text analysis algorithms, we got to work with existing frameworks as well as write our own code. We also learned the importance of breaking down a big project into smaller, manageable parts. Once we had organized our workflow into reachable goals, we found that we could delegate tasks to each other and make rapid progress.
## What's next for Deepfake ID
Since our project is (ideally) meant to be integrated with an existing social media app, it's currently a little back-end heavy. We hope to expand this project and get social media platforms onboard to using our deepfake detection method to alert their users when a potential deepfake video begins to spread. Since our method of detection has distinct advantages and disadvantages from existing AI deepfake detection, the two methods can be combined to create an even more powerful deepfake detection mechanism.
Reach us on Discord: **spica19** | ## Inspiration
The prevalence of fake news has been on the rise. It has left the public unable to receive accurate information and has fostered heightened distrust of the media. With it being easier than ever to propagate and spread information, the line between fact and fiction has become blurred in the public sphere. Concerned by this situation, we built a mobile application that detects fake news on the websites people browse and alerts them when information is found to be false or unreliable, thereby hopefully bringing about a more informed electorate.
## What it does
enlightN is a mobile browser with built-in functionality to detect fake news and alert users when the information they are reading - on Facebook or Twitter - is either sourced from a website known for disseminating fake news or known to be false itself. The browser highlights which information has been found to be false and provides the user sources to learn more about that particular article.
## How we built it
**Front-end** is built using Swift and Xcode. The app uses Alamofire for HTTP networking and WebKit for the browser functionality. Alamofire is the only external dependency used by the front end; other than that, it is all Apple's SDKs. The webpage HTML is parsed and sent to the back-end, and the response is parsed on the front end.
**Back-end** is built using Python, Google App Engine, Microsoft Cognitive Services, HTML, JavaScript, CSS, BeautifulSoup, the Hoaxy API, and the Snopes archives. After receiving the full HTML from the front-end, we scrape the text of Facebook and Twitter posts using the BeautifulSoup module in Python. Using keywords extracted from that text by the Microsoft Key Phrase Extraction API (which is built on Microsoft Office's natural language processing toolkit) as an anchor, we pull relevant information (tags for potential fake news) from both Snopes.com's database and the results returned by the Hoaxy API, and send this information back to the front-end.
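A rough sketch of the scraping and key-phrase step is below; the `<p>` selector and the Azure region in the endpoint are illustrative assumptions rather than our production values:

```
# Sketch: extract post text with BeautifulSoup, then request key phrases from
# Microsoft's Key Phrase Extraction API (Text Analytics v2.1-style endpoint).
import requests
from bs4 import BeautifulSoup

KEY_PHRASE_URL = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.1/keyPhrases"  # assumed region
SUBSCRIPTION_KEY = "YOUR_KEY_HERE"  # placeholder

def extract_post_text(html):
    soup = BeautifulSoup(html, "html.parser")
    # Real Facebook/Twitter markup needs its own selectors; <p> is a stand-in.
    return " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))

def key_phrases(text):
    payload = {"documents": [{"id": "1", "language": "en", "text": text}]}
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    response = requests.post(KEY_PHRASE_URL, json=payload, headers=headers)
    response.raise_for_status()
    return response.json()["documents"][0]["keyPhrases"]
```

The returned key phrases are then used as search anchors against the Snopes database and the Hoaxy results.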
**Database** contains about 950 websites known to be unreliable news sources (e.g. fake, conspiracy, or satire sites) and about 15 well-known trustworthy news sources.
## Challenges we ran into
One challenge we ran into was implementing the real-time text search needed to cross-reference article headlines and Tweets with fact-checking websites. Our initial idea was to utilize Google's ClaimReview feature on their public search, but Google does not offer an API for public search, and after talking to some of the Google representatives we learned that automating this with a script would not be feasible. We then decided to implement this feature using Snopes. Snopes does not have a public API for accessing their article information and loads their webpage dynamically, but we were able to isolate the API call Snopes uses to populate their site with results from an article query. The difficult part of recreating this API call was figuring out the proper way to encode the POST payload and request header information before making the HTTP call.
## Accomplishments that we're proud of
We were able to successfully detect false information from any site, with special handling for Facebook and Twitter. The app works and makes people aware of disinformation in real time!
## What we learned
We applied APIs that were completely new to us - the Snopes API, the Hoaxy API, and the Key Phrase Extraction API - in our project within the past 36 hours.
## What's next for enlightN
Building a fully functional browser and an app that detects false information in any third-party app. We also plan to make our API public as it matures.
The upcoming election season is predicted to be drowned out by a mass influx of fake news. Deepfakes are a new method of impersonating famous figures saying fictional things, and could be particularly influential in the outcome of this and future elections. With international misinformation becoming more common, we wanted to give users a layer of protection and a grounding in reality. Facebook's Deepfake Detection Challenge, which aims to crowdsource ideas, inspired us to approach this issue.
## What it does
Our Chrome extension processes the video the user is currently watching. The video is first scanned using AWS to identify the politician or celebrity it features. Knowing the public figure allows the app to choose the machine-learning model trained to identify deepfakes targeting that specific celebrity. The results of the deepfake analysis are then shown to the user through the Chrome extension, allowing the user to see in the moment whether a video might be authentic or forged.
Our web app offers the same service, and includes a prompt that describes this issue to users. Users are able to submit a link to any video they are concerned may be tampered with, and receive a result regarding the authenticity of the video.
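A hedged sketch of the celebrity-identification step described above, using boto3's Rekognition client on a single extracted frame (the frame path is a placeholder):

```
# Sketch: identify the public figure in one video frame with Amazon Rekognition.
import boto3

def identify_celebrity(frame_path="frame.jpg"):  # placeholder path
    client = boto3.client("rekognition")
    with open(frame_path, "rb") as f:
        response = client.recognize_celebrities(Image={"Bytes": f.read()})
    faces = response.get("CelebrityFaces", [])
    return faces[0]["Name"] if faces else None
```

The returned name is used to select the per-celebrity model in the analysis described below.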
## How we built it
We used PCA (Principal Component Analysis) to build the model by hand, and broke the video down into one-second frames.
Previous research has evaluated the mouths in deepfake videos and noticed that these are altered, as are a person's general facial movements. For example, previous algorithms have looked at the mannerisms of politicians and detected when those mannerisms differed in a deepfake video. However, this is a computationally very costly process, and current methods are only utilized in research settings.
Each frame is then analyzed to extract six different concavity values that train the model. Testing data is then used to evaluate the model, which was trained with a linear-kernel Support Vector Machine. The Chrome extension is written in JavaScript and HTML, and the other algorithms are in Python.
The testing data set is drawn from the video in the user's current browser, and the training set is composed of Google Images of the celebrity.
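A condensed sketch of this pipeline is below: one-second frame sampling, a feature vector per frame, PCA, and a linear-kernel SVM. The `concavity_features` function is a stand-in for our six concavity measurements, not the actual implementation:

```
# Sketch: frame extraction + PCA + linear-kernel SVM; the six concavity values
# are assumed to come from a separate feature-extraction step.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def one_second_frames(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % fps == 0:  # keep roughly one frame per second
            frames.append(frame)
        i += 1
    cap.release()
    return frames

def concavity_features(frame):
    # Placeholder for the six concavity values computed from facial landmarks.
    return np.zeros(6)

def train_classifier(real_frames, fake_frames):
    X = np.array([concavity_features(f) for f in real_frames + fake_frames])
    y = np.array([0] * len(real_frames) + [1] * len(fake_frames))
    pca = PCA(n_components=2).fit(X)
    clf = SVC(kernel="linear").fit(pca.transform(X), y)
    return pca, clf  # reuse the same PCA transform on test frames
```

At prediction time, frames from the user's current video are passed through the same `concavity_features` and PCA transform before calling `clf.predict`.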
## Challenges we ran into
Finding a dataset of deepfakes large enough to train our model was difficult. We ended up using 70% of the frames for training and 30% for testing (all frames are distinct, however). Automatically exporting data from JavaScript to Python was also difficult, as browser JavaScript cannot write to external files.
Therefore, we utilized a server and were able to successfully coordinate the machine learning with our web application and Chrome extension!
## Accomplishments that we're proud of
We are proud to have created our own model and made the ML algorithm work! It was very satisfying to see the PCA clusters and to graph the values into groups within the Support Vector Machine. Furthermore, getting each of the components (i.e., Flask and the Chrome extension) to work was gratifying, as we had little prior experience with transferring data between applications.
We are able to successfully determine if a video is a deep fake, and notify the person in real time if they may be viewing tampered content!
## What we learned
We learned how to use AWS and how to configure its credentials and SDKs. We also learned how to configure and utilize our own machine learning algorithm, and about the dlib and OpenCV Python libraries!
Furthermore, we came to understand the importance of the misinformation issue, and how a machine learning model combined with an effective user interface can appeal to and attract internet users of all ages and demographics.
## What's next for DeFake
Train the model with more celebrities and get the Chrome extension to output whether the videos in your feed are deepfakes as you scroll. Specifically, this would require decreasing the run time of the machine learning algorithm running in the background. Although the algorithm is not as computationally costly as conventional algorithms created by experts, the run time still prevents exact real-time feedback within seconds.
We would also like to use Facebook's deepfake datasets when they are released.
Deepfakes are also likely to become a potent tool of cyber-stalking and bullying in the short term, says Henry Ajder, an analyst at Deeptrace. We hope to utilize larger databases of more diverse deepfakes beyond celebrity images to help prevent this future threat.