I joined Cruise in March 2019, drawn to its technology and mission to create safer and cleaner transportation. During my 3-year tenure, I touched a variety of internal tools from Semantic Mapping to Simulation. I spent most of my time as the design lead for Ground Truth.
Ground Truth work
Ground Truth is a large organization within Cruise. It partners with ML teams to scope and produce the datasets necessary for the development of our Perception and Prediction models.
As the only designer on the team, my role was to architect, design, and optimize the tools enabling the machine learning loop:
The ML loop. Ground Truth’s areas of responsibility are shown in black.
In just 2 years on Ground Truth, I standardized and integrated 5 pipelines to synchronize the labeling output of our different sensors. I architected an intuitive and scalable labeling workspace and improved labeling speed through better UX, smarter interactions, and automation. I designed innovative labeling features, including temporal labeling. I designed the data mining platform from the ground up, to allow data scientists to find relevant scenes for labeling. I also defined the design system for the platform and its different labeling taxonomies.
My Ground Truth projects slashed costs in half while making labeling 10x more efficient. Below are a few highlights.
01: Object model and interactions
When I joined Ground Truth, the labeling experience was rudimentary and hacky. Labeled objects were poorly defined and did not have states. It was difficult for users to tell which object was selected, which made the work challenging and error-prone. New labelers had to undergo 3 weeks of intensive training just to become operational. The lack of a clear definition of what “an object” was also caused problems on the engineering side, limiting our ability to scale and innovate.
I led an initiative to completely rethink our object model and related interactions.
The main proposal was to separate the concepts of objects and artifacts, where an object is the real-world item being classified (car, pedestrian, street sign, etc.) and the labeled footprint of that object in different sensor spaces are its artifacts (lidar boxes, image boxes, etc).
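The object/artifact split can be sketched as a simple data model. This is a minimal illustration of the concept, not Cruise's actual schema; all names and fields are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Artifact:
    """One labeled footprint of an object in a single sensor space."""
    sensor: str    # e.g. "lidar" or "camera_front" (illustrative names)
    frame: int     # frame index within the scene
    geometry: dict # e.g. box corners, dimensions, heading

@dataclass
class LabeledObject:
    """The real-world item; owns all of its per-sensor artifacts."""
    object_id: str
    object_class: str  # e.g. "car", "pedestrian", "street_sign"
    artifacts: List[Artifact] = field(default_factory=list)

    def artifacts_in(self, sensor: str) -> List[Artifact]:
        """All footprints of this object in one sensor space."""
        return [a for a in self.artifacts if a.sensor == sensor]

# One object can carry both a lidar box and an image box:
car = LabeledObject("obj-1", "car")
car.artifacts.append(Artifact("lidar", frame=0, geometry={"size": [4.5, 1.8, 1.5]}))
car.artifacts.append(Artifact("camera_front", frame=0, geometry={"box": [100, 80, 240, 200]}))
```

Because the object owns its artifacts rather than being defined by any one of them, footprints from different sensor pipelines can be linked to the same real-world item.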
This is how it ended up looking in the product:
Object list (left) and object details (right) showing multiple artifact types across frames
I separated the object details from the object list in order to expose available object metadata such as its class, the type and location of its artifacts, and any linked objects (a feature I later worked on).
I designed an Artifacts module that tracks an object’s footprint across sensors and over the duration of the scene. It also acts as a shortcut to navigate to each artifact.
In parallel, I did some visual work to make artifacts more interactive. I added hover and selection states and used color to highlight different object classes (helpful when you have a crowded scene). I also created an icon for each class in the taxonomy.
Class colors and icons for our Universal Taxonomy.
Artifact types and their respective states
Key results: The new design allowed Cruise to automatically link several artifacts to the same object and integrate different labeling pipelines. It reduced labeler training time from several weeks to a few days, and sped up labeling by an estimated 4x. Additionally, it improved dataset quality by a few percentage points.
02: Smart interpolation of lidar boxes
In the early days of Ground Truth, labelers had to manually create lidar boxes on each frame in a scene. This process was incredibly time-consuming and resulted in jittery transitions from one frame to the next.
After watching labelers work, and attempting to label a few scenes myself, I proposed a solution where any box created or manually edited by labelers is automatically turned into a keyframe and is either linearly interpolated to the nearest keyframes or propagated to the rest of the scene. This last bit was particularly helpful for static objects such as parked cars that only require one keyframe.
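The keyframe logic above can be sketched for a single box parameter. This is a hedged illustration of the behavior described, not the production implementation; the function name and signature are my own.

```python
def interpolate_track(keyframes, num_frames):
    """Fill every frame of a scene from sparse labeler keyframes.

    keyframes: {frame_index: value} where value is one numeric box
    parameter (e.g. x position). A single keyframe is propagated to
    the whole scene (static objects such as parked cars); between two
    keyframes, values are linearly interpolated; outside the outermost
    keyframes, the nearest keyframe value is held.
    """
    frames = sorted(keyframes)
    out = []
    for f in range(num_frames):
        if f <= frames[0]:
            out.append(keyframes[frames[0]])
        elif f >= frames[-1]:
            out.append(keyframes[frames[-1]])
        else:
            # Find the surrounding keyframes and blend linearly.
            lo = max(k for k in frames if k <= f)
            hi = min(k for k in frames if k >= f)
            t = (f - lo) / (hi - lo)
            out.append(keyframes[lo] + t * (keyframes[hi] - keyframes[lo]))
    return out

# A parked car needs only one keyframe; a moving box blends smoothly:
assert interpolate_track({2: 5.0}, 5) == [5.0, 5.0, 5.0, 5.0, 5.0]
assert interpolate_track({0: 0.0, 4: 8.0}, 5) == [0.0, 2.0, 4.0, 6.0, 8.0]
```

In practice the same blend would apply to every box parameter (position, size, heading), which is what removes the frame-to-frame jitter of fully manual boxes.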
I worked with a PM to make sure the solution was robust to edge cases and we created a series of supporting hotkeys to further speed up labeling.
The key for this project was to clearly highlight keyframes on the timeline. Unfortunately, the old timeline was rudimentary and did not isolate frames:
The old Ground Truth timeline was rudimentary… brutalist even?
The timeline had to be entirely redesigned. I completed a series of UI explorations to enable support for different types of frames:
Option E was selected for development:
In the final build, we color-coded frames as follows:
Key results: The new timeline and interpolation method (including new UI and hotkeys) sped up lidar box labeling by 10x and made our lidar labeling tool “a lot more fun to use” according to several labelers’ feedback. It also set the foundation for temporal labeling (see below) and automated pre-labeling, which used Cruise’s perception model to pre-label scenes before human review.
03: Temporal labeling
Cruise wanted to improve its cars’ ability to analyze vehicle signals such as brakes and blinkers. The Perception team reached out to Ground Truth with an ambitious request to label tens of thousands of vehicle signals according to a complex new taxonomy.
At the time, Cruise’s tooling to label this type of data (which we called “time-varying attributes” or TVAs) was rudimentary. The bare-bones UX and lack of visualization made it slow and error-prone. It would have taken months to build a mediocre dataset.
I worked with cross-functional teams to clarify the project’s scope & requirements, refine the proposed taxonomy and devise an intuitive and scalable UI. I created multiple prototypes ranging from basic MVP to more advanced visual solutions to drive the conversation, build consensus and decide how to phase the implementation.
The selected UI is a visual timeline with an input method relying on forward propagation. It uses an innovative hotkey system to further speed up the labeling process. It is not only easy to learn but also incredibly efficient and generic enough to support any future projects requiring TVA labeling.
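The forward-propagation input method can be sketched as follows. This is a minimal, hypothetical illustration of the behavior, not the shipped code; the function name and value labels are assumptions.

```python
def forward_propagate(changes, num_frames, default=None):
    """Forward-propagate time-varying attribute (TVA) values.

    changes: {frame_index: value} set by the labeler. Each value holds
    until the next change, so labelers only mark transitions (e.g.
    blinker off -> on) instead of re-entering a value on every frame.
    """
    values, current = [], default
    for f in range(num_frames):
        if f in changes:
            current = changes[f]
        values.append(current)
    return values

# Left blinker turns on at frame 2 and off again at frame 5:
assert forward_propagate({0: "off", 2: "on", 5: "off"}, 7) == \
    ["off", "off", "on", "on", "on", "off", "off"]
```

Pairing this input model with per-attribute tracks on the timeline is what makes a handful of labeled transitions sufficient to cover an entire scene.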
The multi-track timeline with TVA input fields in the object details panel (on the left)
Key results: The new design delivered over 80% reduction in temporal labeling time (55.54 minutes to 10.51 minutes) and helped Ground Truth meet all of its dataset delivery milestones.
“It was a nightmare without the timeline. The feature helps my colleagues and I check the labels with ease.”
Perception Engineer
“Prior to the timeline, labelers were not looking forward to labeling more vehicle signals, post timeline they are enjoying it.”
Ops partner
More than half of Ground Truth projects now involve TVA labeling. This labeling method has dramatically accelerated the creation and improvement of perception models for vehicle signals, wheel orientation, traffic lights, car doors, and more.
Simulation work
After Ground Truth, I spent 6 months as the only designer in the Simulation org. I worked on a variety of projects across teams. One of these was a telemetry dashboard that I designed in collaboration with Simulation leads. The dashboard was meant to help executives assess the health of Simulation efforts. It shipped shortly after I left the company.
Here’s a teaser of the UI.
I designed all the data viz, chart interactions and filters.
Design systems work
Throughout my tenure at Cruise, I regularly contributed to the company’s design system: I completely reworked our color palette to make it more accessible and cohesive, added new components (pills, icons, menus, hotkey indicators), and improved existing ones (single- and multi-select fields, inputs, buttons). I also regularly led or attended our design system meetings, providing feedback on other designers’ proposals.
Here’s an example of a component I added:
Figma variants for the pill component
Figma documentation to ensure consistent use of the component across the team.
I can share additional projects upon request.
Email me at cecile0112358@gmail.com