<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:media="http://search.yahoo.com/mrss/"
	
	>

<channel>
	<title>Sangwoo Han</title>
	<link>https://sangwoohan.cargo.site</link>
	<description>Sangwoo Han</description>
	<pubDate>Tue, 24 Feb 2026 05:14:18 +0000</pubDate>
	<generator>https://sangwoohan.cargo.site</generator>
	<language>en</language>
	
		
	<item>
		<title>Index</title>
				
		<link>https://sangwoohan.cargo.site/Index</link>

		<pubDate>Mon, 04 Nov 2024 04:20:10 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/Index</guid>

		<description></description>
		
	</item>
		
		
	<item>
		<title>Replit Agent Builder</title>
				
		<link>https://sangwoohan.cargo.site/Replit-Agent-Builder</link>

		<pubDate>Tue, 24 Feb 2026 05:14:18 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/Replit-Agent-Builder</guid>

		<description>Replit Agent BuilderOrganization: ReplitTime: 10.2025 - 2.2026Project Team: Sangwoo Han (Product Designer), Amadeo Pellicce (Product Manager, Tech Lead), Nick Ondo (AI Engineer), Phil MacEachron (Frontend Engineer), Ajay Nayak (Product Engineer)
&#60;img width="2256" height="1616" width_o="2256" height_o="1616" data-src="https://freight.cargo.site/t/original/i/fc23b862d5a740a32def81730eb7c9714ee94bf267c544909ae1fb137c5a4856/Screenshot-2026-02-23-at-9.22.00PM.png" data-mid="245265374" border="0"  src="https://freight.cargo.site/w/1000/i/fc23b862d5a740a32def81730eb7c9714ee94bf267c544909ae1fb137c5a4856/Screenshot-2026-02-23-at-9.22.00PM.png" /&#62;What is Replit Agent Builder?Replit’s Agent Builder (Externally knows as Automation Stack)&#38;nbsp;lets users build and run multi-step, production-ready workflows using natural language. Powered by the Replit Agent and Mastra AI, it translates intent into event-driven automations with triggers, logic, integrations, and scheduled execution—making complex process orchestration accessible without traditional coding.





Context
This initiative began as an experimental hack by an AI engineer to explore whether Replit Agent could generate and orchestrate agents using Mastra and Inngest. As adoption and internal excitement grew, it became clear that formalizing and productizing the concept could meaningfully expand Replit’s existing app stack, turning a proof of concept into a scalable automation layer within the platform.




My Roles

Owned and designed the end-to-end product experience


Shaped product vision and go-to-market strategy


Partnered with internal teams and external customers to validate high-impact use cases and drive leadership buy-in


Try the final design prototype
Key Decisions
Formalized Agent + Mastra + Inngest into a reusable automation layer, not just agent chaining


Designed for event-driven, production-grade workflows (triggers, async execution, error handling)


Avoided deterministic module authoring — Replit is vibe-coding native, and modules are AI-generated


Reframed the flowchart as a debugging &#38;amp; transparency layer, not a primary builder


Added natural language descriptions at the agent and module level to preserve clarity without sacrificing simplicity


Impact
Successfully launched; 100K automations created within the first week


Elevated a single-engineer experiment into a platform-level automation capability


Expanded Replit beyond app generation into workflow orchestration


Established the foundation for Replit’s evolution from an app-building tool to a “build everything” platform


Drove internal alignment and leadership momentum toward long-term product expansion

</description>
		
	</item>
		
		
	<item>
		<title>Little Bird</title>
				
		<link>https://sangwoohan.cargo.site/Little-Bird</link>

		<pubDate>Wed, 14 May 2025 04:28:23 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/Little-Bird</guid>

		<description>Little Bird
Organization: Little Bird
Time: 01.2025 -
Project Team: Sangwoo Han (Product Designer), Alex Green (CEO, Co-founder), Dmitriy Vasilyuk (AI Engineer), Victor Aremu, Rahul Ranjan, Sibi Sharanyan (Front-end Engineers)

&#60;img width="3162" height="1848" width_o="3162" height_o="1848" data-src="https://freight.cargo.site/t/original/i/ced7fc2b3652ae8b91330d21d525e99df94b7121d19691dd3f19bb833be1d0a5/All-view.png" data-mid="233172104" border="0"  src="https://freight.cargo.site/w/1000/i/ced7fc2b3652ae8b91330d21d525e99df94b7121d19691dd3f19bb833be1d0a5/All-view.png" /&#62;
What is Little Bird?
Little Bird is an AI personal assistant architected for task, project, and idea management. Rather than relying on direct application integrations, it leverages macOS screen reader accessibility features to programmatically interpret and fetch data from user-permitted application sources. This method of data acquisition allows Little Bird to build contextual understanding from content such as notes and messages. Based on this derived context, the system provides proactive deadline reminders, information retrieval, and brainstorming support. User interaction is conducted via natural language. The overarching design objective is the enhancement of organizational efficiency and productivity.
Try Little Bird
The Challenge: Redefining AI Interaction
Traditional AI assistants often feel fragmented or purely transactional. The challenge was to design an experience where AI conversations felt like the central, most intuitive way to interact, rather than an add-on feature. How could we make complex AI capabilities easily accessible and genuinely helpful in a user-friendly interface?
My Role &#38;amp; Responsibilities
As a Product Designer on the Little Bird project, I was responsible for:


Shaping Product Strategy: Collaborating with product managers and engineers to define the vision and strategic direction for Little Bird, focusing on user needs and market opportunities.
End-to-End Product Design: Leading the design process from initial concept and research through to detailed UI/UX design, prototyping, and final visual design. This included user flows, wireframes, high-fidelity mockups, and interactive prototypes.
Building Design Systems: Developing and implementing a comprehensive design system to ensure consistency, scalability, and efficiency across the Little Bird platform.
The Approach &#38;amp; Design Process
Little Bird’s three key feature areas
&#60;img width="554" height="239" src="https://freight.cargo.site/w/554/q/94/i/30b0e178e18958bc64b2c1e6e48186b55e3ce54e015126363ee81da3712bb630/Frame-1707480329.png"&#62;
&#60;img width="554" height="239" src="https://freight.cargo.site/w/554/q/94/i/90a37527fca99fa83a374924700284b7abd8d34284b535a22cf0a23021e29281/journallist-view-final.png"&#62;
&#60;img width="554" height="239" src="https://freight.cargo.site/w/554/q/94/i/91b4626676d314b6aa693a7ab9171c865cbe58f37de99b58b3c09fbe8fe3ed24/987.png"&#62;
Chat: An LLM-powered conversational interface that provides context-aware responses
Tasks: AI-powered task suggestions based on user context, with a lightweight task management system
Journals: Intelligent daily logging that automatically captures and organizes important moments, decisions, and progress
Before State
The original design had Chat, Tasks, and Journals siloed in separate tabs. While each feature showed promise individually, they weren’t working together as a unified experience, limiting their collective potential.
An LLM product’s power lies in its unpredictability and fluidity, yet this same quality makes it difficult to approach. Any attempt to reduce this cognitive load through structured prompts or chat recommendations inevitably constrains the very flexibility that makes LLMs revolutionary. What does it mean to design for an LLM product, when the interface itself is the technology?
Design Goals
Approachable: Build frameworks that make AI more approachable while preserving its full potential
Systematic: Create foundational design patterns that grow with the product
AI minimalism: Simplify through AI-driven interactions, not interface elements
Conceptualizing the "Dual Layout": The Heart of Little Bird
To make AI-powered conversations the core experience, we conceptualized and designed a unique "Dual Layout." This layout separates conversational history and active input/output areas, allowing users to easily follow the flow of interaction while simultaneously seeing contextual information or results. The primary goal was to reduce cognitive load and make interactions feel more fluid and intuitive than traditional chat interfaces, as well as provide “materials for conversations” with AI.
Developing the Little Bird Design System
To ensure consistency and scalability, I spearheaded the development of a comprehensive design system. This included defining color palettes, typography, iconography, spacing guidelines, and a library of reusable UI components. The design system accelerated the design and development process and ensured a cohesive user experience across all touchpoints.
</description>
		
	</item>
		
		
	<item>
		<title>Physics for AR Engine</title>
				
		<link>https://sangwoohan.cargo.site/Physics-for-AR-Engine</link>

		<pubDate>Wed, 06 Nov 2024 04:37:19 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/Physics-for-AR-Engine</guid>

		<description>Physics for AR EngineOrganization: Meta Reality LabsTime: 01.2023 - 06.2023Project team: Sangwoo Han (Design Lead), Cameron Sylvia (Product Manager), Rusty Koonse(Engineer)

&#60;img width="1823" height="934" width_o="1823" height_o="934" data-src="https://freight.cargo.site/t/original/i/90cf33015c12bc6b612f0a389efb9f2fecce73d0fdc08213ca66ec53610eb10a/Frame-41.png" data-mid="221256057" border="0" data-scale="85" src="https://freight.cargo.site/w/1000/i/90cf33015c12bc6b612f0a389efb9f2fecce73d0fdc08213ca66ec53610eb10a/Frame-41.png" /&#62;
0. What is AR Engine?
AR Engine is a software framework driving mixed reality experiences across multiple devices and platforms at Meta. Supported by a team of over 20 engineers, AR Engine collaborates closely with partner product teams to develop core capabilities such as rendering, spatial recognition, sound, and animations.

As the design lead for AR Engine, I guided designers and engineers from partner teams in implementing these features through workshops, prototyping, and best practices.
1. Project goals
The engineering team had been working on adding physics capabilities to AR Engine as part of an effort to transition the platform from supporting mobile experiences (such as Instagram filters) to powering AR Glasses (Orion). With this new capability enabled, the project began with an open question: What meaningful user value can physics bring to the Orion platform?

To navigate this challenge, I established three primary goals:
Organize workshops to generate ideas and narrow them down to a few prototype concepts.
Showcase the design strategy by building a working prototype on Spark using the WIP library (JavaScript).
Share the design strategy with Orion’s product teams, assist with technology integration, and collaborate on best practices.
2. Design prototype
To generate a wide range of promising ideas, I conducted multiple rounds of workshops with stakeholders, using specific frameworks to guide the discussions.
&#60;img width="2334" height="1532" width_o="2334" height_o="1532" data-src="https://freight.cargo.site/t/original/i/124d62a1d8a979a73b86ebb9967e8bf65002f005006adadccb8b4050b6cc6668/Screenshot-2024-11-13-at-11.26.55.png" data-mid="221722112" border="0" data-scale="72" src="https://freight.cargo.site/w/1000/i/124d62a1d8a979a73b86ebb9967e8bf65002f005006adadccb8b4050b6cc6668/Screenshot-2024-11-13-at-11.26.55.png" /&#62;
To narrow down the ideas, I defined a set of criteria—or "razors"—based on project goals and technical feasibility. The prototype needed to meet the following standards:
Solve real problems: The prototype should demonstrate how physics can provide meaningful solutions to real-world problems on the Orion platform.
Be inspiring: For the technology to be adopted, the prototype must go beyond a simple tech demo. It should spark new conversations and collaborations, paving the way for additional features, design guidelines, and best practices.
Be lightweight: Physics processing is resource-intensive in AR, especially with Orion’s UX model where multiple virtual objects coexist. To avoid overloading the system, the prototype should limit physics elements to only a few key components.
One use case that emerged was an e-commerce demo allowing users to virtually try out physical products. This demo was intentionally lightweight, with only a few physics-enabled elements. The result: the demo was published to the internal content library, and I collaborated with the entertainment and utility teams to further explore its implementation.

3. Additional problem - tooling
While working on the prototype, I identified a major issue with the new API design affecting the workflow for Spark creators. Similar to other 3D engines like Unity or Unreal, Spark has two complementary development environments: visual editing and scripting.

&#60;img width="3624" height="1274" width_o="3624" height_o="1274" data-src="https://freight.cargo.site/t/original/i/8ad278f477d83d9db67e568560a64b54962c09f75858f38f2592e5bb4233ce3e/sparkeditor.png" data-mid="221741057" border="0"  src="https://freight.cargo.site/w/1000/i/8ad278f477d83d9db67e568560a64b54962c09f75858f38f2592e5bb4233ce3e/sparkeditor.png" /&#62;
Typically, a Spark creator begins by building a 3D world in the Visual Editor, then moves into scripting to add advanced interactions and animations, as outlined below.
&#38;nbsp;&#60;img width="1101" height="475" width_o="1101" height_o="475" data-src="https://freight.cargo.site/t/original/i/34dd9cae2e45809445d30d09706dadc643b70e95ea5a9db2503505caa0def4bb/visto.png" data-mid="221741660" border="0" data-scale="80" src="https://freight.cargo.site/w/1000/i/34dd9cae2e45809445d30d09706dadc643b70e95ea5a9db2503505caa0def4bb/visto.png" /&#62;
However, because the physics API was developed in isolation by the AR Engine team, its approach did not align with the existing principles. An experienced Spark creator would expect to add physics to scene objects as a child parameter, like this:
const sceneBall = await Scene.root.findFirst("Ball");
...
sceneBall.physics.body.controlType = Physics.controlType.PHYSICS_CONTROL;
In contrast, the physics APIs developed by the AR Engine engineers were packaged in a separate class. As a result, creators had to create invisible rigid body objects, like this:
let physicsBall = world.create({
    name: "Bouncy Ball",
    type: "BODY",
    bodyType: "BODY_RIGID",
    controlType: "PHYSICS_CONTROL",
    collisionResponse: "SOLID",
    restitution: 0.9, // Bouncy!
    massProps: {
        mass: 0.1,
    },
    sleepThresholds: [0.8, 1.0], // Linear then angular velocity sleeping thresholds
    belongsToGroups: [0],
    collidesWithGroups: [0],
    collisionEventEnabled: false,
    collisionVolume: collisionBall.id,
    transform: {
        translation: [0, 0.25, 0.7],
        rotationQuaternion: [0, 0, 0, 1],
    },
});



Creators then had to synchronize the position and scale of the scene object with the invisible rigid body through script:
sceneBall.transform.translation = physicsBall.transform.translation;
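Depending on whether the WIP library exposed the rigid body's transform as a reactive signal or as a plain value, that line either binds once or has to run on every update. A minimal sketch of the latter case, assuming Spark's Time module is available for the update loop (the WIP library's actual update hook may have differed):

const Time = require("Time");

// Re-copy the simulated pose onto the visible scene object roughly every frame.
Time.setInterval(function () {
    sceneBall.transform.translation = physicsBall.transform.translation;
}, 16);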
This API design not only makes creating physics bodies incredibly difficult but also makes UI design for the Visual Editor nearly impossible. To address this, I took the initiative to design the user experience for Spark’s Visual Editor in collaboration with the Spark design team. The project I proposed had two main objectives:

Enable physics for the Spark creator community in a way that integrates seamlessly with the existing workflow and mental model.
Bring the resulting design model back to the AR Engine team to inform the API redesign.
A key design challenge was categorizing physics parameters and defining their behaviors to ensure the features could be accessed intuitively by creators.

&#60;img width="1572" height="740" width_o="1572" height_o="740" data-src="https://freight.cargo.site/t/original/i/748f90e4f1a550b8224cbd5aceb95f934aeb47d8e9cfc219c55b033790fab421/Screenshot-2024-11-13-at-9.45.36PM.png" data-mid="221751814" border="0"  src="https://freight.cargo.site/w/1000/i/748f90e4f1a550b8224cbd5aceb95f934aeb47d8e9cfc219c55b033790fab421/Screenshot-2024-11-13-at-9.45.36PM.png" /&#62;
&#60;img width="1454" height="820" width_o="1454" height_o="820" data-src="https://freight.cargo.site/t/original/i/c3bfd68c2e4cb93265ddaf3514f7b7103dea912ed5ae8034ad35295cca92366d/Screenshot-2024-11-13-at-9.45.30PM.png" data-mid="221751816" border="0"  src="https://freight.cargo.site/w/1000/i/c3bfd68c2e4cb93265ddaf3514f7b7103dea912ed5ae8034ad35295cca92366d/Screenshot-2024-11-13-at-9.45.30PM.png" /&#62;
Examples of studies for categorization
The following final designs were shipped, and the API designs were updated accordingly.

&#60;img width="1420" height="790" width_o="1420" height_o="790" data-src="https://freight.cargo.site/t/original/i/6f8cf254bbc3498cced1319cf4a4b0ad7b2494a9c774fd3af7efa94617ba5bd1/Studio-State-Machine1.png" data-mid="221751919" border="0"  src="https://freight.cargo.site/w/1000/i/6f8cf254bbc3498cced1319cf4a4b0ad7b2494a9c774fd3af7efa94617ba5bd1/Studio-State-Machine1.png" /&#62;
&#60;img width="1420" height="790" width_o="1420" height_o="790" data-src="https://freight.cargo.site/t/original/i/45c56940434e62f15c491d9c9ed2b4a898b74a683544b81f18dc4752d14098fd/Studio-State-Machine.png" data-mid="221751920" border="0"  src="https://freight.cargo.site/w/1000/i/45c56940434e62f15c491d9c9ed2b4a898b74a683544b81f18dc4752d14098fd/Studio-State-Machine.png" /&#62;


4. Impact
The project had four main outcomes:
Initiated smaller AR game projects inspired by the mini basketball demo
Validated e-commerce use cases from the perspective of ease of development for third parties
Delivered the appropriate tools for Spark Studio, including the Visual Editor, Patch Editor, and scripts
Updated the physics API based on the mental model I created for the Visual Editor
</description>
		
	</item>
		
		
	<item>
		<title>AR 3D Visual Messaging</title>
				
		<link>https://sangwoohan.cargo.site/AR-3D-Visual-Messaging</link>

		<pubDate>Tue, 22 Oct 2024 03:54:56 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/AR-3D-Visual-Messaging</guid>

		<description>3D Visual messaging for Orion (AR Glasses)
Organization: Meta Reality Labs
Time: 02.2022 - 04.2022
Project team: Sangwoo Han (Design Lead), Jessica Abad Kelly (Product Manager), Zeke Brill (Design Prototyper), Julia Moore (UX Researcher)
&#60;img width="2627" height="1479" width_o="2627" height_o="1479" data-src="https://freight.cargo.site/t/original/i/e20607e8fec14f3ba8cfd4c282e5202d8e11fc4676c5c9bd41a07b49af613970/Frame-288599.png" data-mid="220429603" border="0" data-scale="76" src="https://freight.cargo.site/w/1000/i/e20607e8fec14f3ba8cfd4c282e5202d8e11fc4676c5c9bd41a07b49af613970/Frame-288599.png" /&#62;
1. Project goals
Orion is Meta’s first true Augmented Reality glasses. Equipped with cutting-edge technology, the device aims to bring groundbreaking changes to the realm of wearable devices and spatial computing. As a product designer on Orion’s communication team, I was tasked with exploring net-new communication features for the platform. The key project goals were:
Define new communication features for Orion leveraging 3D visual content
Work with UXR to conduct initial validations
Deliver the design strategy to partner design teams for further development


&#60;img width="2144" height="939" width_o="2144" height_o="939" data-src="https://freight.cargo.site/t/original/i/e1345aec18201b10d4846f58b0c68a7ae7e54821844a04f06b13e61ceb004849/461317573_857494119820299_5813342043919318034_n-1-copy.jpg" data-mid="220429913" border="0" data-scale="83" src="https://freight.cargo.site/w/1000/i/e1345aec18201b10d4846f58b0c68a7ae7e54821844a04f06b13e61ceb004849/461317573_857494119820299_5813342043919318034_n-1-copy.jpg" /&#62;


2. Navigating the technology and user values
In this project, a few technical and product aspects of Orion played key roles.
3D &#38;amp; surface aware: Content can be rendered spatially over the real world.
Seamlessly interactive: Through the EMG wristband that detects hand gestures, users can interact with virtual content as if it were physical.
Highly personal: Orion is set to provide highly customized computing environments that reflect the user’s preferences.

On the other hand, I dove deeper into user research studies on messaging needs for Meta’s family of apps (Messenger, WhatsApp, and Instagram Direct) to understand what users are looking for in visual messaging. From this audit, I found there are largely two categories of visual messaging in asynchronous digital communications.
Photo/video messaging:
Photos and videos that capture and share real-time events
Filters and decorative stickers to add more expressiveness
Expressive messages
GIFs, stickers, emoji, animoji, reactions, etc.
Tend to be ephemeral, consumed in the context of the current conversation
Used to clarify emotional meaning that’s hard to capture in text

Based on these insights, I came up with two themes for further design exploration.

&#60;img width="2276" height="1072" width_o="2276" height_o="1072" data-src="https://freight.cargo.site/t/original/i/dd893d2410f0e2ee07a8cd71fd36c44313930a680f30e833ada0f85ad73e23c9/Screenshot-2024-10-22-at-10.36.00.png" data-mid="220433063" border="0" data-scale="84" src="https://freight.cargo.site/w/1000/i/dd893d2410f0e2ee07a8cd71fd36c44313930a680f30e833ada0f85ad73e23c9/Screenshot-2024-10-22-at-10.36.00.png" /&#62;
3. Create to express - Hologram message
Using Orion’s inward-facing cameras, holographic recordings can offer users unique expressive opportunities, delivering a realistic capture of oneself as if sender and receiver were together in person.

User scenario examples
Send a special holographic birthday message with firework effects
Show off my new haircut
Success factors for holographic recordings
Ease of capturing a hologram using Stage
Ability to easily edit and share after capturing
A variety of AR effects to choose from that uniquely empower my holograms

Sender side user flow
&#60;img width="8684" height="3713" width_o="8684" height_o="3713" data-src="https://freight.cargo.site/t/original/i/e678f7c91aeee20f499108d8f2f846a235954a48cc75e74a5efc5338173c02e7/Frame-288619.png" data-mid="220450751" border="0"  src="https://freight.cargo.site/w/1000/i/e678f7c91aeee20f499108d8f2f846a235954a48cc75e74a5efc5338173c02e7/Frame-288619.png" /&#62;


Receiver side user flow&#60;img width="8769" height="1594" width_o="8769" height_o="1594" data-src="https://freight.cargo.site/t/original/i/d07b8f10dfe24d9c5822d150a790e5bb35aa40b9a4316e708bb313dd77114f43/Frame-288620.png" data-mid="220450771" border="0"  src="https://freight.cargo.site/w/1000/i/d07b8f10dfe24d9c5822d150a790e5bb35aa40b9a4316e708bb313dd77114f43/Frame-288620.png" /&#62;
4. Amplify my communication - AR Stickers

Ephemeral content with a relatively low footprint that users can quickly select and send to clarify emotional meanings that are hard to capture in text
Meaning can be applied rather than inherent for certain relationships; when meaning is applied, the motivation for use is unique to the people and the conversation
Used as approximations of the subtle nuances of in-person communication (body language, facial expressions, and tone of voice)
Fun, humorous, cute, and playful features are subject to a novelty effect more so than those used to add meaning
Success factors
Variety of content to choose from
Ease of sending and receiving
Opportunities to spark further conversations

&#60;img width="2488" height="1090" width_o="2488" height_o="1090" data-src="https://freight.cargo.site/t/original/i/42f29dfb27f805c38fdfe4ec25059d58276688f8430c07c837d1a5e9babe1750/Screenshot-2024-10-22-at-11.06.27.png" data-mid="220433139" border="0" data-scale="88" src="https://freight.cargo.site/w/1000/i/42f29dfb27f805c38fdfe4ec25059d58276688f8430c07c837d1a5e9babe1750/Screenshot-2024-10-22-at-11.06.27.png" /&#62;


Sender side user flow
&#60;img width="8769" height="1606" width_o="8769" height_o="1606" data-src="https://freight.cargo.site/t/original/i/9d03d62872e255288d09eaa9bbe53cf70b175c5a49d11411f8369ff40a99337b/Frame-288621.png" data-mid="220451616" border="0"  src="https://freight.cargo.site/w/1000/i/9d03d62872e255288d09eaa9bbe53cf70b175c5a49d11411f8369ff40a99337b/Frame-288621.png" /&#62;


Receiver side user flow
&#60;img width="8769" height="1593" width_o="8769" height_o="1593" data-src="https://freight.cargo.site/t/original/i/a4d614c95010c916a0cb377528b13964cc649365dc931673d4eb45fbf8c4ef23/Frame-288622.png" data-mid="220451621" border="0"  src="https://freight.cargo.site/w/1000/i/a4d614c95010c916a0cb377528b13964cc649365dc931673d4eb45fbf8c4ef23/Frame-288622.png" /&#62;


Additional content types
&#60;img width="6664" height="6388" width_o="6664" height_o="6388" data-src="https://freight.cargo.site/t/original/i/10a39112f23b77ad5e4106c2640bc63d0fd7a6cf2b92368d77a229bbac9e27a8/Frame-288627.png" data-mid="220451671" border="0"  src="https://freight.cargo.site/w/1000/i/10a39112f23b77ad5e4106c2640bc63d0fd7a6cf2b92368d77a229bbac9e27a8/Frame-288627.png" /&#62;



5. UXR Takeaways
&#60;img width="538" height="538" width_o="538" height_o="538" data-src="https://freight.cargo.site/t/original/i/2a7364c89af4ffbff1d9910220e346129a829b58e4e99b762b3ffa7879e86f9d/comms.gif" data-mid="221255146" border="0"  src="https://freight.cargo.site/w/538/i/2a7364c89af4ffbff1d9910220e346129a829b58e4e99b762b3ffa7879e86f9d/comms.gif" /&#62;UXR Prototype
Participants liked the variety of options for asynchronous communication, and imagined selecting one based on context, audience, and personal preference.
⇨ Consider tapping into the Spark creator community to build a content platform, as well as working with vendors to create pop-culture-inspired stickers
Participants suggested that the AR stickers in the concepts were fun but not very reusable, and some wanted more customization options.
⇨ Conduct follow-up UXR with an MVP around retention and customization needs
Participants found the end-to-end flows to be intuitive and seamless, including the asymmetrical concepts, though a few were concerned with the number of steps required to send/receive AR stickers.
⇨ Consider designing a system leveraging AI technology to recommend the most appropriate stickers based on the context of the conversation
</description>
		
	</item>
		
		
	<item>
		<title>AR Glasses interaction prototype</title>
				
		<link>https://sangwoohan.cargo.site/AR-Glasses-interaction-prototype</link>

		<pubDate>Sun, 11 Oct 2020 23:55:19 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/AR-Glasses-interaction-prototype</guid>

		<description>AR Glasses interaction prototype
Personal project
Time : Oct. 2020
&#60;img width="2560" height="1440" width_o="2560" height_o="1440" data-src="https://freight.cargo.site/t/original/i/423836be1ce556a1b9822ae4d283aa559a93782d61c094cccf5f8af5b21a61c8/Screen-Shot-2020-10-11-at-5.07.01-PM.png" data-mid="85296966" border="0"  src="https://freight.cargo.site/w/1000/i/423836be1ce556a1b9822ae4d283aa559a93782d61c094cccf5f8af5b21a61c8/Screen-Shot-2020-10-11-at-5.07.01-PM.png" /&#62;



1. Inspiration
Hololens’ gesture input interactions leverage the hardware's robust sensors to let users directly interact, with bare hands, with computer-generated objects virtually placed in the real world. The fact that users can see and touch these virtual objects as if they were as tangible as the real objects around them opens the door to an ideal world where every interface is freed from screens and becomes invisible. I believe that's what 'augmenting reality' truly means: it is about empowering us as human beings, residents of both the physical and virtual worlds for the past 30 years, by merging the two. It is about distributing information and meaning that could previously only be presented on monitors to where they belong.

The interaction models offered by current AR headset devices work great, although there is still room to improve. This interaction demo aims to suggest such an improvement in the context stated above.

2. Everything has to be in the field of view
&#60;img width="1920" height="1080" width_o="1920" height_o="1080" data-src="https://freight.cargo.site/t/original/i/01f5780fb72d76806624585392210cf939dc259c0d40f1528d2959184aa05f9a/Hololens.jpg" data-mid="85306341" border="0"  src="https://freight.cargo.site/w/1000/i/01f5780fb72d76806624585392210cf939dc259c0d40f1528d2959184aa05f9a/Hololens.jpg" /&#62;Currently, the gesture detection technology in the most AR headsets relies on the headset device itself, which causes a few problems.
1. User has to raise their hands to the level of the eyes
Since all the virtual objects would be displayed in the field of view, users intensionally would raise their hands to the front to have them recognized by the device and interact with the virtual objects. From my previous explorations on the wearable smartwatch, this is an unnatural, tiring gesture that users would consider it as an intentional performance rather than an intuitive interactive experience.
A good example is Hololens' Bloom. As much as it looks magical to summon a contextual menu with this gesture, it is often awkward, slow, and takes efforts to perform successfully.2. Hand gesture control is on the heavy-duty
In the current interaction paradigm of AR headset, the users' are burdened with executing both targetting and interacting with their hand movements. Let's take a quick look at the following two scenarios taking place in the near future.

I preserve my tea leaves in a container that can communicate with my AR glasses. When I'm running low on tea, the container displays an indicator on the AR layer. I can walk up to the container and tap on the indicator to summon the virtual UI layer. From there, I can quickly order a refill.
I want to turn off my smart lamp, which can be controlled with hand gestures. I can raise my arm, point at the lamp, and swipe downwards to turn it off.
In both cases, the user has to target the object they want to interact with (place the finger on the buttons of the virtual UI layer, aim the arm at the lamp) through gesture, and then execute the interaction. There are two noticeable problems with this model:

Users have to be a lot more intentional with their gestures. The targeting stage is particularly challenging because they get no haptic feedback to know whether they have successfully selected a target.
The virtual UIs still have to rely on screen-based interaction models such as buttons within a pop-up dialog box. On top of being challenging to interact with, this model can easily overpopulate virtual space with lots of small screens.

Can this interaction be more intuitive, fast, ubiquitous and 'magical'? 
More importantly, how can we make the user the hero, not the product? 


3. Just take a look
The new interaction model suggested in this research leverages eye-tracking technology for targeting. It allows the user to specify the object to interact with by just looking at it, and to execute the interaction with simple gestures, arms down. Before jumping into user scenarios, I made a few technical assumptions.

The AR glasses the user wears have reliable eye-tracking technology that tracks both direction and depth.
The user wears a companion device that tracks hand movements outside the field of view of the AR glasses' camera, or a technology that delivers the equivalent experience.
Storyboard
&#60;img width="1536" height="2048" width_o="1536" height_o="2048" data-src="https://freight.cargo.site/t/original/i/253a44720cf4cb2e6e6ead75b4e4dd60a21be9a39a1fcf20eba25ce89e8e1a19/unnamed.png" data-mid="85729240" border="0" data-scale="53" src="https://freight.cargo.site/w/1000/i/253a44720cf4cb2e6e6ead75b4e4dd60a21be9a39a1fcf20eba25ce89e8e1a19/unnamed.png" /&#62;
Scenario video demo


Interaction prototype
This prototype was built in Unity to create a testing environment close to the model suggested in the scenario and the demo video. Leap Motion was used for the gesture tracking.
</description>
		
	</item>
		
		
	<item>
		<title>Bot Framework Composer</title>
				
		<link>https://sangwoohan.cargo.site/Bot-Framework-Composer</link>

		<pubDate>Thu, 19 Mar 2020 04:52:38 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/Bot-Framework-Composer</guid>

		<description>Bot Framework Composer
Organization : Microsoft FUSE Lab
Time : May. 2019 -
Project Team : Sangwoo Han (Lead Designer)
&#60;img width="1440" height="1077" width_o="1440" height_o="1077" data-src="https://freight.cargo.site/t/original/i/46a29a9b36d8251bb2fd711cf11c65ca6147b0255145e2307dc55a897d9eedb9/overview.png" data-mid="86183686" border="0"  src="https://freight.cargo.site/w/1000/i/46a29a9b36d8251bb2fd711cf11c65ca6147b0255145e2307dc55a897d9eedb9/overview.png" /&#62;

Try Bot Framework Composer

1. Project goal
Bot Framework Composer is an open-source, integrated development tool for developers and multi-disciplinary teams to build bots and conversational experiences with the Bot Framework SDK.
Composer consists largely of three parts:
Conversation flow designer : Users can visually author conversation flows in the flow editor.
Language understanding : Users can author training data for Natural Language Understanding (NLU) machine learning models in context.
Language generation : Users can author the bot’s dynamic responses with Language Generation markdown.

At the highest level, the Composer project’s design goals are:
Deliver an integrated, visual coding tool for the developer audience to unlock the full potential of the Bot Framework SDK, lower the learning curve, and speed up a development process in ways that were not possible when developers worked only with raw code.
Design for a non-developer audience so that content writers and conversation designers can be part of the bot development process more inclusively. This allows a multi-disciplinary team to collaborate more effectively.
2. User research

&#60;img width="7287" height="5487" width_o="7287" height_o="5487" data-src="https://freight.cargo.site/t/original/i/22e267bd1ce3ffbd1e1dc8cba192ffd6f8796bce469a1663368aed5476ec973a/MSFT_JM_Overview_Final-copy.jpg" data-mid="64012830" border="0"  src="https://freight.cargo.site/w/1000/i/22e267bd1ce3ffbd1e1dc8cba192ffd6f8796bce469a1663368aed5476ec973a/MSFT_JM_Overview_Final-copy.jpg" /&#62;
In collaboration with Blink, we conducted user research sessions that included all possible stakeholders in bot development teams. Because the field is still young from a product development point of view, there are no standardized team compositions or development processes. We therefore focused on team members’ capabilities, responsibilities, and tasks rather than their job titles.
P0 user - developers : Capable of learning new SDKs and applying them to create fully implemented bot applications.
P1 user - conversation designers : Capable of learning new software development tools and using them to design the end-user experience (bot dialog logic, bot responses, and NLU models).
P2 user - content writers : Capable of writing bot dialogs and scenarios that serve end-user, business, and product needs.
What cannot be stressed enough is that these should be understood as roles that members of a bot development team play, rather than as individual users. Some teams have one individual playing all three roles, while other teams could, or should, have dedicated members for each role.
The following are selected case studies.
3. Case study - Application interaction model
Bot Framework as a software stack was designed around a specific idea: adaptive dialog.
Adaptive dialog consists of roughly three key elements.
Dialogs : A dialog is the centerpiece of adaptive dialog. When it is called, it takes over the current context and has the ‘triggers’ that belong to it ready.
Triggers : A trigger is a sub-element of a dialog that can be activated by various conditions. One of its major purposes is to recognize user inputs and fire the action nodes attached to it to deliver conversation logic.
Action nodes : Action nodes are the atoms of conversation logic. A conversation flow created with action nodes is attached to a trigger and is started by the trigger's activation.
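To make these concepts concrete, here is a rough sketch of how a dialog, a trigger, and action nodes compose in the declarative format, written as a JavaScript object literal. This is a simplified illustration; the shipped Composer schema may differ in detail.

// Simplified illustration of an adaptive dialog; the shipped schema may differ.
const rootDialog = {
    $kind: "Microsoft.AdaptiveDialog",
    triggers: [
        {
            // Trigger: activated when the NLU model recognizes the "BookFlight" intent.
            $kind: "Microsoft.OnIntent",
            intent: "BookFlight",
            actions: [
                // Action nodes: atoms of conversation logic, executed in order.
                {
                    $kind: "Microsoft.TextInput",
                    property: "user.destination",
                    prompt: "Where would you like to fly?",
                },
                {
                    $kind: "Microsoft.SendActivity",
                    activity: "Booking a flight to ${user.destination}.",
                },
            ],
        },
    ],
};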
Since these are software design concepts, they are not necessarily intuitive tools for designing conversations. While Composer's main audience is still bot developers, it was critical for the design team to consider how this tool could be a platform for collaboration among developers, writers, and designers, serving the multidisciplinary nature of bot building. Another aspect that played a huge role is the fact that Composer is a web app meant to be customized by customers to meet their needs. The interaction model therefore had to nail down the key technical concepts of adaptive dialog while providing room for scaling.

&#60;img width="1440" height="1077" width_o="1440" height_o="1077" data-src="https://freight.cargo.site/t/original/i/8e0d1d38bfa76649501522dc795860ff761426c7193395a7fc9cf88988de7744/IM.png" data-mid="86185964" border="0"  src="https://freight.cargo.site/w/1000/i/8e0d1d38bfa76649501522dc795860ff761426c7193395a7fc9cf88988de7744/IM.png" /&#62;


4. Case study - Action node design

Action nodes are the elements that a user puts together to create conversation flows in the visual editor.
The main design goals were the following.
Readability : Users should be able to scan the visual editor quickly to get the information needed to take further actions. For example, a prompt node not only has to display key information but also indicate how the node behaves, so that it reads as a logical flow when put together with other action nodes. The concept of readability carries different meanings for each role persona.
Scalability : A design system for the action nodes should be scalable, but in a rather specific sense. Because 1) Composer is a ‘shell’ on the Bot Framework SDK, meaning it is fundamentally raw code, and 2) Composer is meant to be customized by users to serve their own product development needs, the design system needs to be extremely simple but strongly guided. That way, the system won’t break with complicated code expressions, and it can serve as a guideline for designing your own action nodes.
&#60;img width="3117" height="1317" width_o="3117" height_o="1317" data-src="https://freight.cargo.site/t/original/i/f477219ba966ca3c53ec19b47cb57c0ea0c987598728cf904022cff43fdee421/Frame-63-2.png" data-mid="64280491" border="0"  src="https://freight.cargo.site/w/1000/i/f477219ba966ca3c53ec19b47cb57c0ea0c987598728cf904022cff43fdee421/Frame-63-2.png" /&#62;5. The current state
Bot framework Composer is still in the public beta stage, and it is being used by selected internal teams actively. We are working to meet our UX quality standards towards the official release at BUILD 20’ in May 2020.
</description>
		
	</item>
		
		
	<item>
		<title>FF Navigation</title>
				
		<link>https://sangwoohan.cargo.site/FF-Navigation</link>

		<pubDate>Thu, 08 Nov 2018 03:54:22 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/FF-Navigation</guid>

		<description>FF Navigation
Organization : Faraday Future
Time : Jan. 2017 - Sep. 2018
Project team : Sangwoo Han (Lead UX/UI designer), Isabelle Hoogland (Product manager), Yongge Hu, Chaojun Xue, Jiajun Liu, Johnson Zhang, Maulik Shah (Android developer)

&#60;img width="1400" height="1000" width_o="1400" height_o="1000" data-src="https://freight.cargo.site/t/original/i/8be93c44d6769283037cef3badd7572bef04313467d7451495b97b50cf077e27/navcar.png" data-mid="28006005" border="0"  src="https://freight.cargo.site/w/1000/i/8be93c44d6769283037cef3badd7572bef04313467d7451495b97b50cf077e27/navcar.png" /&#62;

1. Project goal
FF 91 is Faraday Future’s first production vehicle and flagship model. All-electric, autonomous-ready, and seamlessly connected, it embodies the latest mobility advancements in performance, intelligence, and user experience. In this context, the in-car native navigation app became an incredibly important piece of FF 91’s digital product ecosystem. Although navigation is one of the most saturated product categories, with a number of clear winners, after initial research the team concluded that a native navigation app, FF Navigation, equipped with specialized features would greatly improve the user experience. As the lead UX/UI designer, I led all design efforts on the project, from user research and defining design goals to designing the interaction frameworks and design system, execution, and user testing.
2. User Research / Defining UX challenges
User research was done in two phases. First, I did in-person interviews with Faraday employees who own electric vehicles. The questions were not limited to the usage of navigation apps; I framed the research to cover the general user experience, to discover the vehicles' usability problems, not just the apps'. I conducted about thirty interviews over two weeks.
After the user interview stage, I moved on to contextual inquiry with the same group of people. I first set up cameras in participants' vehicles for one or two days to get candid footage of everyday driving experiences. After collecting the videos, I sat down with the participants, watched the footage together, and had rather casual conversations. From this research, I extracted the following three UX challenges to address with FF Navigation:

During-drive usability
Reduce range anxiety
Voice AI Integration

3. UX flow
&#60;img width="4513" height="2920" width_o="4513" height_o="2920" data-src="https://freight.cargo.site/t/original/i/dd7d45abbe21f2e2c7fad564f3edf6508e986c13b10ba58cbdc5be81ec5442fc/Artboard.png" data-mid="28092179" border="0"  src="https://freight.cargo.site/w/1000/i/dd7d45abbe21f2e2c7fad564f3edf6508e986c13b10ba58cbdc5be81ec5442fc/Artboard.png" /&#62;On top of delivering common features like searching for places, displaying place information, and guidance, the UX flow addresses some of the distinctive features that are unique to FF 91, such as advanced voice interaction, cross-platform connectivity, and vehicle feature integrations.
&#60;img width="7063" height="2443" width_o="7063" height_o="2443" data-src="https://freight.cargo.site/t/original/i/56d52602d02b1e9e552378cb42cc3a37126ff040d3910fdd0c31f35836104bd4/Tablet.png" data-mid="28093096" border="0"  src="https://freight.cargo.site/w/1000/i/56d52602d02b1e9e552378cb42cc3a37126ff040d3910fdd0c31f35836104bd4/Tablet.png" /&#62;Example of basic search user flow based on the diagram.
4. During-drive experience – Ergonomics study
	&#60;img width="1754" height="2481" width_o="1754" height_o="2481" data-src="https://freight.cargo.site/t/original/i/527de98d9416d66197e8154dbacf033429c917ab4a69f5ccda60837132b965fd/driver_CID_vision_reachability_03062017.png" data-mid="28090120" border="0" data-scale="100" alt="Center display ergonomics study" data-caption="Center display ergonomics study" src="https://freight.cargo.site/w/1000/i/527de98d9416d66197e8154dbacf033429c917ab4a69f5ccda60837132b965fd/driver_CID_vision_reachability_03062017.png" /&#62;
	

To analytically understand our users, I worked with the human factors team to observe how being a driver, and having a touch screen, affects usability. We determined various ergonomics factors for the center display, including button sizes, font sizes, and color contrast. More importantly, we did a lot of research on the reachability and legibility of the screen. The picture on the left shows the result: simply put, due to the positioning of the screen and the driver, the only glanceable part of the screen is the upper half, whereas the lower part is the interaction-friendly area. Based on this research, I defined the design frameworks that would become the building blocks.



	




On the right is the final screen, demonstrating the key design principles I defined based on this research.

Placed visual information on the upper side, while text information is located on the bottom side. To systematically deliver this idea, I strictly designed every screen as a combination of a map plus a bottom-positioned floating card. The map view is responsible for map markers and route graphics, whereas the card contains mostly detailed text.
There is only one emphasized primary button. Since it is challenging to build muscle memory with a touch screen, the driver has to scan the center display every time she or he interacts with it. Also, because most of the interactions are focused on the bottom side, I wanted to create an extreme visual hierarchy to improve scannability.
Another design challenge was to position key interactions on the driver's side, so that the driver doesn't need to stretch an arm, which can be extremely distracting.
	&#60;img width="1200" height="1920" width_o="1200" height_o="1920" data-src="https://freight.cargo.site/t/original/i/7f3519ba25402bbe496bac83faa7a42e49d0d557fa648941a2201ac2fd54a6c4/d-trip_planning-day-4-routing_default.png" data-mid="28093256" border="0"  src="https://freight.cargo.site/w/1000/i/7f3519ba25402bbe496bac83faa7a42e49d0d557fa648941a2201ac2fd54a6c4/d-trip_planning-day-4-routing_default.png" /&#62;



5. Reducing range anxiety
One of the biggest parts of electric vehicle drivers’ lives is concern about battery level. This is also why the Tesla drivers in our user research were strongly satisfied with their native navigation app: it automatically maps charging stations along the route. My challenge was to design a competitive system with our limited access to databases, and to discover opportunities to reduce range anxiety even more aggressively.
- Advanced Trip Planning
‘Trip planning’ was strategically emphasized as a major feature. It allows the user to plan a trip ahead of navigation. The user flow below describes the normal trip planning feature.
&#60;img width="5246" height="2231" width_o="5246" height_o="2231" data-src="https://freight.cargo.site/t/original/i/6fa94a9b4f9b4717b6181e99fbf6c5a64f278251ab2688b66de7c7557e414458/Artboard.png" data-mid="28126096" border="0"  src="https://freight.cargo.site/w/1000/i/6fa94a9b4f9b4717b6181e99fbf6c5a64f278251ab2688b66de7c7557e414458/Artboard.png" /&#62;
However, if the estimated battery level at arrival goes below zero, a new ‘Charge Required’ button appears. This button initiates a sub-flow that searches for charging stations that can provide enough charge to reach the destination.
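In pseudocode terms, the check works roughly as follows. This is a hypothetical sketch; the names, units, and consumption model are invented for illustration and are not FF Navigation's actual code.

// Hypothetical sketch of the trip-planning battery check; illustrative only.
function planTrip(route, vehicle) {
    const estimatedAtArrival =
        vehicle.batteryKwh - route.distanceKm * vehicle.consumptionKwhPerKm;

    // Below zero means the destination is unreachable on the current charge,
    // so surface the 'Charge Required' sub-flow to insert a charging stop.
    if (estimatedAtArrival &#60; 0) {
        return { action: "CHARGE_REQUIRED" };
    }
    return { action: "START_GUIDANCE" };
}

// Example: 60 kWh remaining, but a 400 km trip at 0.2 kWh/km needs 80 kWh,
// so planTrip returns { action: "CHARGE_REQUIRED" }.
planTrip({ distanceKm: 400 }, { batteryKwh: 60, consumptionKwhPerKm: 0.2 });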

&#60;img width="5292" height="2231" width_o="5292" height_o="2231" data-src="https://freight.cargo.site/t/original/i/8c57b54291611e1be5a55a506ed665e749f336d795d3c151c5f87471d63e5981/Artboard.png" data-mid="28126129" border="0"  src="https://freight.cargo.site/w/1000/i/8c57b54291611e1be5a55a506ed665e749f336d795d3c151c5f87471d63e5981/Artboard.png" /&#62;- Insufficient Battery AlertIn addition, if FF Navigation figures the current battery level is not sufficient to reach the destination during guidance, it pushes an system alert with an option to search for charging stations.

&#60;img width="5292" height="2231" width_o="5292" height_o="2231" data-src="https://freight.cargo.site/t/original/i/2368733a3d6036aed82df2268865c8d481e1ff685845b476d9c1a7ba16dd7934/Artboard.png" data-mid="28126904" border="0"  src="https://freight.cargo.site/w/1000/i/2368733a3d6036aed82df2268865c8d481e1ff685845b476d9c1a7ba16dd7934/Artboard.png" /&#62;


6. Voice AI Integration
FF 91 is an AI-first vehicle, so it was important that FF Navigation's design serve as part of the bigger AI UX architecture, FFAI. On top of basic search and navigation features, FFAI provides conversational interaction to help the driver access the system without having to interact with the touchscreen.

&#60;img width="1751" height="648" width_o="1751" height_o="648" data-src="https://freight.cargo.site/t/original/i/3130644c8b1df0b26d102b9ca95e5251a61670f6494423ac5ef068cf763873cc/Artboard.png" data-mid="35167616" border="0"  src="https://freight.cargo.site/w/1000/i/3130644c8b1df0b26d102b9ca95e5251a61670f6494423ac5ef068cf763873cc/Artboard.png" /&#62;
7. Test &#38;amp; Validation
&#60;img width="2880" height="1800" width_o="2880" height_o="1800" data-src="https://freight.cargo.site/t/original/i/79cddd1c3795ecd6db53f6ad10bb0828694e91c395e90e2f19d4f8e05214b7e4/Screen-Shot-2018-12-13-at-12.39.04-PM.png" data-mid="30586867" border="0"  src="https://freight.cargo.site/w/1000/i/79cddd1c3795ecd6db53f6ad10bb0828694e91c395e90e2f19d4f8e05214b7e4/Screen-Shot-2018-12-13-at-12.39.04-PM.png" /&#62;
Given the nature of an unreleased product, user testing and validation required a strategic approach. I structured the testing in three stages.
1. Low-fidelity paper prototyping : As the most frequently executed testing method, paper prototyping was used to validate interaction design ideas quickly. The candidates were mostly teammates.

2. Mid-fidelity Framer prototyping : Framer is my go-to prototyping tool; it's fast and rigorous. To test a certain section of a user flow, I aggressively used Framer prototypes and a testing car (pictured above). We conducted this user testing every 4-5 weeks to gather user data to design with.

3. High-fidelity Android prototyping : For holistic user testing, I used beta builds from developers. The prototype was installed and set up in our testing vehicles, and candidates drove the vehicles for a couple of days. With this approach, I was able to see the bigger picture of the UX design's impact on the product and user experience.


</description>
		
	</item>
		
		
	<item>
		<title>Samsung Gear S2</title>
				
		<link>https://sangwoohan.cargo.site/Samsung-Gear-S2</link>

		<pubDate>Sun, 11 Nov 2018 22:51:35 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/Samsung-Gear-S2</guid>

		<description>Samsung Gear S2
Organization : Samsung Research America – Think Tank Team

Date : Jun. 2014 - Feb. 2015
Team : Sangwoo Han (UX/Interaction designer), Sajid Sadi (Project manager), Link Huang (UI motion designer), Eva-Maria Offenberg (UI designer), Cathy Kim (UI Designer), Curt Aumiller (Lead Industrial designer), Jiawei Zhang (Industrial designer), Chengyuan Wei (Industrial designer)

&#60;img width="1620" height="1080" width_o="1620" height_o="1080" data-src="https://freight.cargo.site/t/original/i/94f722ebcf41724dadd9e6944ebf0d1ca9a5d1da80ad7281b764dfe96065e70e/07.jpg" data-mid="28273447" border="0"  src="https://freight.cargo.site/w/1000/i/94f722ebcf41724dadd9e6944ebf0d1ca9a5d1da80ad7281b764dfe96065e70e/07.jpg" /&#62;
1. Project Goals
The Samsung Gear S2 comes in a versatile, circular design with an intuitive, custom UX and advanced features that enable users to enhance, personalize and bring more fun to their mobile experience.
 
The goal of the project was to create a device that delivers the right information at the right time, with strong UX design leveraging its unique hardware. The Gear product family had been struggling with the lack of a compelling product vision and strong UX design. The ultimate goal of the Think Tank Team’s design group was to develop production-ready prototypes and detailed documents that capture our product vision, design frameworks, design strategy, product requirements, and product features.

As the only UX/interaction designer on the project, my role was to conduct user research, design the entire core OS framework and interaction models, and create the UI design system toolkit.
2. User Research

Since the previous Gear products had a small user pool in the US, I had to reach out to a user group for user research. The sessions consisted of in-depth interviews and contextual inquiries. After a series of online sessions, we came away with a few key discoveries.
People in circumstances where their physical access to smartphones is limited found Gear to be a necessary device.

People who prefer not to use their smartphones frequently enjoyed using Gear.

Overall, owning a Gear dramatically decreased the frequency of engaging with smartphones. This means the majority of tasks people perform with their smartphones, such as reading messages and emails and getting notifications, can be done with a much smaller, simpler device.
The most frequently used Gear features were notifications, messages, weather, and the pedometer.
There is an inevitable gap in the information flow from Gear to a smartphone.



From the user research, we decided that the Gear S2 must be a general-purpose device that delivers the right information and features at the right time, without the hassle of using bigger smart devices. Furthermore, the smartwatch had to collect user data through sensors to provide a more personalized experience.
3. Interaction design

The Gear S2 had three distinctive hardware constraints, determined before I joined the project, that defined the interaction design direction.

It’s a watch
It has the world’s first fully circular display
The bezel around the screen rotates
My job as an interaction designer was to come up with design frameworks, information architecture, interaction models, and UI systems to deliver the most intuitive user experience.

3-1. Case Study 1 : Rich Notification / Widget system
The previous Gear devices had a traditional app-launcher OS structure, a direct adaptation of smartphone OS design. It was a familiar design that didn't require any learning curve, but it failed to provide a user experience suited to its medium. I designed the new OS launcher with a more rigorous UX: simpler interaction patterns and a clear mental model.

&#60;img width="2025" height="809" width_o="2025" height_o="809" data-src="https://freight.cargo.site/t/original/i/0832a969bb7833165a915094d17e4a91837054d2d80e33e62a7209edd71594fd/s2.png" data-mid="28276938" border="0"  src="https://freight.cargo.site/w/1000/i/0832a969bb7833165a915094d17e4a91837054d2d80e33e62a7209edd71594fd/s2.png" /&#62;
The new Rich Notification / Widget OS design was built around three core UX ideas.

Simpler interaction model : Leveraging the bezel rotation and a context-sensitive design approach, dynamic information such as new messages, new emails, and social media notifications is always one bezel-tick away.

Clear mental model : A simple mental model was extremely important to build a compact, scalable, and sustainable design system. The new mental model was designed to plant a simple idea in the user's mind: turn the bezel left for notifications, right for widgets. Users never get lost in this system, even without any indicator.

Intuitive userflow : By enhancing the functionality of notifications and widgets, users can take care of their most frequent tasks, such as reading or replying to messages, on the S2 immediately after confirming them. If they need access to more advanced features, users can jump right into the mobile app from its rich notification or into apps on the S2.
The design went through numerous iterations with rapid prototyping, and was tested and validated by user testings.
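To make the left/right mental model concrete, here is a minimal TypeScript sketch of how a Tizen wearable web app might route the bezel's per-tick "rotarydetent" event; the showNextWidget / showNextNotification helpers are hypothetical stand-ins for the launcher's real navigation logic.

// Tizen wearable web apps receive one "rotarydetent" event per bezel tick;
// ev.detail.direction is "CW" (clockwise) or "CCW" (counter-clockwise).
interface RotaryEvent extends Event {
  detail: { direction: 'CW' | 'CCW' };
}

// Hypothetical navigation helpers standing in for the real launcher logic.
declare function showNextWidget(): void;       // turn right: widgets
declare function showNextNotification(): void; // turn left: notifications

document.addEventListener('rotarydetent', (ev: Event) => {
  const { direction } = (ev as RotaryEvent).detail;
  if (direction === 'CW') {
    showNextWidget();
  } else {
    showNextNotification();
  }
});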

3-2. Case Study 2: Interaction models



The Gear S2's bezel rotation was a powerful hardware interaction idea, but it needed proper interaction design to become the centerpiece of the product. Working with the team's mechanical engineer, we focused mainly on two aspects: the number of rotation steps and the feel of each step. We prototyped and tested various designs and materials, and measured friction patterns to finalize the bezel design.
There were two final candidates: a mechanism using springs, and another using magnets. Internal testing favored the magnet design, but the spring was eventually chosen due to manufacturing constraints. The number of detents was set to 60.
With the hardware interaction design settled, I worked on software interaction models that could synergize with the rotating bezel. Since it was a completely new interaction, this process involved creating a new, consistent mental model and visual indicators.
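With 60 detents, the bezel clicks every 360 / 60 = 6 degrees. Below is a small illustrative sketch, not Samsung's actual code, of how a continuous rotation angle could be snapped to discrete detent steps, and how cumulative ticks could map onto a circular list of screens for the visual indicator.

const DETENTS = 60;                        // one click every 6 degrees
const DEGREES_PER_DETENT = 360 / DETENTS;

// Snap a raw rotation angle (in degrees) to the nearest detent index, 0..59.
function angleToDetent(angleDeg: number): number {
  const normalized = ((angleDeg % 360) + 360) % 360; // keep in [0, 360)
  return Math.round(normalized / DEGREES_PER_DETENT) % DETENTS;
}

// Map cumulative bezel ticks onto a circular list of N screens,
// so the indicator always knows which item is in focus.
function tickToItemIndex(ticks: number, itemCount: number): number {
  return ((ticks % itemCount) + itemCount) % itemCount;
}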

4. Outcome











The Gear S2 launched in October 2015 and became the world's second best-selling smartwatch of that year. The product received positive press reviews, especially for its UX design.
A clinic on what a smartwatch should be - The Verge
“The S2’s new user interface perfectly complements the rotating bezel, as well. It’s fast and easy to understand, unlike the byzantine menus on the Apple Watch or Android Wear. Turn the ring to the right from the watchface and you access widgets of information; turn it to the left to see your notifications. Notifications can be filtered and are interactive, even with third-party apps, just like on Android Wear.”

A real rival to Apple Watch and Android Wear - Tech Radar
“The Gear S2 takes the best of each OS, and combines them to create the best UI we've seen on a smartwatch.”


The UX design I delivered has been carried over to the latest generation of the product line.





 

</description>
		
	</item>
		
		
	<item>
		<title>Garden</title>
				
		<link>https://sangwoohan.cargo.site/Garden</link>

		<pubDate>Tue, 13 Nov 2018 05:11:52 +0000</pubDate>

		<dc:creator>Sangwoo Han</dc:creator>

		<guid isPermaLink="true">https://sangwoohan.cargo.site/Garden</guid>

		<description>Garden &#38;nbsp;&#38;nbsp;
Organization : Samsung Research America - Think Tank Team
Date : May. 2015 - Sep.2015
Team : Sangwoo Han(Project lead, UX/UI designer, Game designer, Visual artist, Android / Unity prototyper), Ruokan He(Visual artist, 3D modeler, Unity prototyper)












&#60;img width="1366" height="768" width_o="1366" height_o="768" data-src="https://freight.cargo.site/t/original/i/5f4fe8d3f8970b6c070ffc22bdf37258979d3627badec6767e677b3a71fc25a2/gardencover-20.png" data-mid="28552140" border="0"  src="https://freight.cargo.site/w/1000/i/5f4fe8d3f8970b6c070ffc22bdf37258979d3627badec6767e677b3a71fc25a2/gardencover-20.png" /&#62;
1. Project goal
Garden is a health app that offers a new way to encourage people to care for their health.


While traditional health apps achieved their success through gamification that adopts video games' competitive nature, I came into this project with a different vision that could appeal to a more general audience. In that sense, I paid careful attention to one of the most successful video games ever made: Pokemon.


Virtual pet simulators such as Pokemon and Tamagotchi attract users with a different kind of fun from competitive video games. Rather than a compete-win-reward model, virtual pet simulators engage players through a nurture-collect-reward model. Once hooked, people care deeply about their virtual pets and happily spend emotional and material resources on them as if they were real animals. The beauty of this model is that the process of growing and collecting virtual pets becomes the reward itself, which provides strong motivation to keep playing.


From this design vision, the goal of Garden was to adopt this model to motivate users to care more about their everyday health activities. The goal of the project was to build a fully working prototype of high enough quality to test and validate, in order to pitch the project to Samsung HQ. Our design intern and I formed a small project group to achieve it.


2. UX of Garden
&#60;img width="768" height="687" width_o="768" height_o="687" data-src="https://freight.cargo.site/t/original/i/95b02326f9dfcff9564b64ca77e8538f37078877bbc2b46547843f1d210b9fcd/garden-screens-01.png" data-mid="28552687" border="0"  src="https://freight.cargo.site/w/768/i/95b02326f9dfcff9564b64ca77e8538f37078877bbc2b46547843f1d210b9fcd/garden-screens-01.png" /&#62;













The Garden app consists of two views: Garden view and Root view.


Garden view is where the flowers grow. It is the core of the world of Garden, where the flowers live, interact with users, and are nurtured. The flowers react directly to touch and shake to make themselves feel more tangible.


Root view is accessible by swiping up from Garden view, and hosts all the health data and measurement features it takes to grow the flowers. The root consists of smaller roots, each showing a different kind of health data. Tapping one of the small roots opens a full graphical view of the correlated data, along with an access point to start a new measurement.



3. Game Design
&#60;img width="1065" height="265" width_o="1065" height_o="265" data-src="https://freight.cargo.site/t/original/i/b00a25f17173f824af48b6701fa14168777e8c872408e97959170609046052dd/flowers-02.png" data-mid="28553147" border="0"  src="https://freight.cargo.site/w/1000/i/b00a25f17173f824af48b6701fa14168777e8c872408e97959170609046052dd/flowers-02.png" /&#62;


Garden measures the user's health activities like a normal health app: it tracks steps, runs, water intake, mood, sleep, and heart rate.
When you contribute your data, a flower earns growth points. As the flower gains points it grows, and when it is fully grown, the flower moves into the Garden and the user receives a new seed.
Simpler activities such as mood input and water intake earn fewer points than running or sleeping.
The game is designed to reward a consistent activity cycle, helping people build a healthier measurement routine. For example, users can still earn points by measuring their heart rate multiple times in a row, but the reward eventually drops to zero, while measuring at the same time every day earns bonus points. A sketch of this scoring rule follows.
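Here is a minimal TypeScript sketch of how such a diminishing-return and consistency-bonus rule could be expressed; the point values, decay, and streak threshold are illustrative assumptions, not the tuning that shipped.

// Illustrative base point values per activity (assumed, not the real tuning).
const BASE_POINTS: Record<string, number> = {
  mood: 1, water: 1, steps: 3, run: 5, sleep: 5, heartRate: 2,
};

// Points for one measurement: repeats within the same day decay toward zero,
// while measuring at a consistent hour across days earns a small bonus.
function growthPoints(
  activity: keyof typeof BASE_POINTS,
  repeatsToday: number,        // times already measured today
  sameHourStreakDays: number,  // consecutive days measured at this hour
): number {
  const decayed = Math.max(0, BASE_POINTS[activity] - repeatsToday);
  const consistencyBonus = sameHourStreakDays >= 3 ? 1 : 0;
  return decayed + consistencyBonus;
}

// e.g. the first heart-rate reading of the day on a 5-day same-hour streak:
// growthPoints('heartRate', 0, 5) === 3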

4. Outcome



The goal of the project was to pitch the design idea to HQ's health team, and a fully working, high-fidelity prototype was necessary to achieve it. The prototype was created using Android and Unity: Garden view was written in Unity and embedded inside Root view, which was built in Android. The app was connected to the Samsung Galaxy's pre-loaded health app through an API to sync the health data collected in Garden. Since I had little experience in 3D modeling, Ruokan helped me with it in Maya.
The prototype was tested internally with 18 participants for a month. The feedback was positive: although approximately half of the group stopped using the app after two weeks, the rest remained heavily engaged with the experience throughout the entire period. The S-Health team and I saw potential in this project.

Therefore, after a few iterations, I delivered the prototype along with the results to HQ. The project has since been approved to move forward toward integration as a basic feature of S-Health.











</description>
		
	</item>
		
	</channel>
</rss>