
Goldilocks AI and AR

2018 – Aug 2020

Master’s Thesis on designing invisible interfaces. Goldilocks explores how contextual computing and AI can reduce information overload. Guided by eight principles, Goldilocks emphasizes the value of technology that disappears.

See Framework › Read thesis ›

 

A world filled with stimuli.

We carry the world’s information in our pocket, yet struggle to navigate the constant stream of content. We are at a pivotal moment in human history where knowledge is abundant but lacks curation. How can computers enhance our capabilities while nurturing our digital wellness? It is time to move beyond the information era.

Goldilocks aims to harness the power of ubiquitous computing and artificial intelligence, ushering in a new era of wearable computers. This evolution reshapes our interactions with the world at every moment, making technology imperceptible and our lives effortless.

 
 
 
Summary of the eight ubicomp design guidelines

Invisible Computing. Summary of principles.

Goldilocks explores how we can harness the power of augmented reality, artificial intelligence, and ubiquitous computing to reshape our everyday interactions, merging technology seamlessly into our lives.

Interfaces that reduce visual noise. Our world was designed to provide the maximum amount of information to assist as many people as possible. This can feel overwhelming.

Spatial computers can help us filter busy environments so we can focus our attention on what is most helpful.

There are two images. On the left is today's subway stop with a busy crowd and complex signage. On the right is an AR alternative where a light blue glow highlights the correct train with an AR personalized sign directing the user.

Interfaces that blend in. Invisible interfaces present information without taking over your attention. Building on our habits or existing mental models, these interfaces can merge seamlessly with the world.

In snap moments, such as rushing to board a train in a busy station, invisible interfaces can provide calm, pinpoint nudges without pulling our attention away from what is directly in front of us.

Interfaces that learn and adapt. Spatial computers are real-world translators. They help make the world clear and intuitive.

This can be particularly helpful if your vision or hearing is limited. Not only can physical spaces become more accessible, but they can also be curated to the needs of each individual.

See an example of an ambient assistant +

Shows an app interface and an AR interface. The app helps users find food items. The AR interface overlays simple explanations over complex ingredient information, with personalized results such as identifying whether an item is vegan.

Interfaces that eliminate ambiguity. The world’s information, filtered for you. Is this loaf of bread vegan? Where are the tortillas?

Your contextual computer can learn your preferences and parse the endless data, helping you move through life, or the grocery store, so you can find what you need with ease.

Three images of AR interfaces on a subway for navigating your route. The first is a pop-up that covers your field of view. The second interface blends into the subway train. The third is projected onto your hand.

Interfaces that work with you. Our moment-to-moment experiences change constantly. Interfaces should adapt as well.

Map or transit interfaces might display gently atop existing signage, available to see at any time. If you need more information, find your trip details with a gesture of the hand.

Interface showing larger alerts when contextually desired. For example, in this case the driver is looking for accessible parking at an airport.

Interfaces you can rely on. Most importantly, our devices must maintain our trust, displaying crucial context only when requested and necessary, and adapting the intensity of visuals to each individual and occasion.

Show just the right amount of interface.

 

 

Genuine experiences. Computing in motion.

See Framework › Read thesis ›