It began in 2016. A project manager at HERE, the excellent Dan White, approached me with an ambitious vision.
“I need you for a new project. I want to pitch building a platform—something like AWS, but for location-based services. I want us to show what that could look like.”
HERE Technologies, formed from a collection of recently acquired startups, needed a central platform for its location data services. This sparked a three-year journey that began with sketches in a PowerPoint and culminated in the creation of the HERE Open Location Platform (OLP). During this exciting time I also played a leading role in designing and launching the HERE Data Processing Pipelines product within the OLP suite.
My role
I set the design direction and high-level vision for the Open Location Platform. For the Data Processing Pipelines within OLP, I led the UX, interaction, and visual design — crafting user flows, wireframes, and testable prototypes, all the way to pixel-perfect mockups for launch. While I conducted a fair amount of user research independently, I also frequently collaborated with the UX research team. I facilitated workshops and brainstorming sessions with cross-functional teams and helped lead regular design scrums and periodic critique sessions with the OLP design team.
Tools
Sketch, InVision, Illustrator, Photoshop, PowerPoint, Miro, Jira, Confluence, Trello, Otter, WebEx, HipChat
The problem
In 2016 HERE was a collection of recently acquired companies — an amazing portfolio of capabilities but with no coherent experience for our customers
HERE's initial challenge seemed to be about eliminating redundant services. But as we did early discovery, it became more about unlocking synergies between the individual products. Location-based models thrive on large, high-quality datasets, precisely what data marketplaces could provide. By combining services into a unified platform, we could create a powerful ecosystem. Products would complement each other, fostering growth instead of remaining isolated and stagnant.
Step one
Crafting a vision
Before the HERE branding team stepped in, our project had a working title: the HERE Analytics Suite. Our goal was to convince the company of its potential and secure resources from the HERE executives. We envisioned this suite evolving into the HERE Open Location Platform we know today.
Convinced this was the future, HERE's executive team was all-in. Over 400 engineering resources and a sizable product and design team were dedicated to turning the idea into reality.
Process: from research to product
Most user-centered design processes mirror each other; ours combined Lean UX with classic design thinking.
Assumptions workshop
We already had a lot of data to work from, as most of our initial customers were internal, but we were eager to expand our user research efforts
Internal customer expectations can differ wildly from those of external customers. We needed to validate that both our internal and external personas were accurate, gather more data, and verify user pain points and jobs-to-be-done.
We hypothesized three potential persona categories
Primary
Data engineers
Secondary
Data scientists
Tertiary
Other team roles (TBD)
Seemingly straightforward assumptions can carry significant risks
Assumption #1
Our primary users are data engineers who build production-ready data pipelines from ETL processes. These pipelines process incoming location data, perform transformations based on the ETL design (likely created by data scientists on their team), and deliver the final results to a repository accessible by location-based service APIs and applications.
Assumption #2
Our primary users prefer to use APIs and command-line over a web-based UI.
Assumption #3
Our primary users regularly work in Java.
Assumption #4
Our users’ primary use case involves batch processing using the Apache Spark data processing framework.
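To make the first and fourth assumptions concrete, here is a minimal sketch of the kind of batch ETL pipeline we imagined our data engineers building. Plain Python stands in for an Apache Spark job, and all record fields and thresholds are hypothetical, invented purely for illustration:

```python
# A toy batch ETL pipeline: the shape of work we assumed our primary
# persona performs. In production this would be a Spark job reading
# from and writing to the platform's data catalogs.

def extract(raw_records):
    """Ingest raw location events (stand-in for reading a catalog)."""
    return list(raw_records)

def transform(records):
    """Drop malformed events and normalize coordinates."""
    cleaned = []
    for r in records:
        lat, lng = r.get("lat"), r.get("lng")
        if lat is None or lng is None:
            continue  # incomplete event
        if not (-90 <= lat <= 90 and -180 <= lng <= 180):
            continue  # impossible coordinates
        cleaned.append({"lat": round(lat, 6), "lng": round(lng, 6)})
    return cleaned

def load(records, sink):
    """Deliver results to a repository (here, an in-memory list)."""
    sink.extend(records)
    return sink

raw = [{"lat": 52.5200, "lng": 13.4050},  # valid fix (Berlin)
       {"lat": 999.0, "lng": 0.0},        # corrupt latitude
       {"lng": 2.35}]                      # missing latitude
sink = []
load(transform(extract(raw)), sink)
```

The extract–transform–load split is the point: the transform step (likely designed by data scientists on the team) is the part the engineer productionizes and maintains.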
Potential red flags in this process 🚩🚩
A fairly important group of primary target users, data engineers at BMW and Audi, were inaccessible for direct UX research. Both companies belong to the consortium that owns HERE Technologies. We were informed that the volatile relationship between these engineering groups and HERE necessitated this restriction. HERE's management, concerned about jeopardizing this already tenuous business partnership, worried that UX research could inadvertently worsen the situation. Potential missteps, such as saying the wrong thing or creating unrealistic expectations of immediate product changes through engineer interviews, were deemed risks. Instead, HERE relied on its sales and product teams to translate customer complaints from BMW and Audi into prioritized roadmap items. We flagged the potential limitations of our research early in the process. Specifically, the lack of direct access to key user groups could hinder our ability to accurately translate their needs into successful outcomes.
Validating assumptions
Themes start to materialize in dozens of interviews with internal customers
We wanted to confirm our initial assumptions by verifying the key user groups (personas), their specific challenges and pain points, along with their jobs-to-be-done and goals.
User feedback
Some of our initial assumptions were confirmed, but there were also themes that surprised us
The reason you talk to your customers is that many of their concerns turn out to be different from your early stakeholder assumptions.
Emerging themes
A (somewhat sizable) pivot: some streaming pipelines were already in production
Users wanted options for both batch and streaming data processing pipelines. This surprised us. During our early discovery interviews we had heard from a number of internal customers that streaming pipelines would be needed at some point, but nobody listed them as a requirement. As we had more conversations and heard from a few of our external customers, we started hearing stories about streaming processes that were ramping up to go to production.
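The batch/streaming distinction that drove this pivot can be shown in miniature. This is a toy model, not how OLP implemented either mode; in the product, both were backed by real data processing frameworks:

```python
# Batch: the full, bounded dataset exists up front and is processed
# in one pass. Streaming: results update incrementally as events
# arrive from an unbounded source.

def batch_count(events):
    """Process a complete dataset and return one final answer."""
    return len(events)

def streaming_counts(event_source):
    """Yield a running count as each event arrives."""
    count = 0
    for _ in event_source:
        count += 1
        yield count  # an up-to-date answer after every event

events = ["gps_fix"] * 5
final = batch_count(events)            # one answer at the end
running = list(streaming_counts(iter(events)))  # an answer per event
```

The product implication is what mattered to us: a streaming pipeline never "finishes," so monitoring, cost, and lifecycle controls all needed different treatment in the UI.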
Other pivotal insights
Data teams had a need for a web-based UI
While our primary user persona relies on APIs and the command line for core data engineering tasks, they also find web interfaces valuable for several purposes.
Learning: web interfaces provide a user-friendly environment for new users to grasp functionality and for existing users to explore data or configurations.
Collaboration and presentation: the visual nature of web interfaces makes them suitable for showcasing data transformations or pipeline designs to colleagues who may not be as comfortable with code.
Task delegation: web interfaces can offload specific, less technical tasks to less experienced team members, streamlining workflows and fostering collaboration.
Data teams wanted to use other languages besides Java
Customers wanted a wider array of languages than just Java, notably Scala and especially Python. Java was still the top request, but we were surprised by how many different working styles users hoped to translate into production pipelines. A number of users even asked if they could turn regular SQL queries into pipelines.
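The "SQL query as a pipeline" request is easy to illustrate. Here sqlite3 stands in for a distributed SQL engine, and the table and column names are invented for the example; the point is that a user's existing query can act as the transform stage of a pipeline unchanged:

```python
import sqlite3

# Stand-in data source: raw location events loaded into a table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (city TEXT, speed_kmh REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("Berlin", 42.0), ("Berlin", 58.0), ("Paris", 30.0)],
)

# The user's existing SQL becomes the pipeline's transform step.
query = "SELECT city, AVG(speed_kmh) FROM events GROUP BY city ORDER BY city"
result = conn.execute(query).fetchall()
```

For users, the appeal was reuse: analytical queries they already trusted could graduate to production without being rewritten in Java.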
Data engineers wanted guidance around “tuning” pipelines
One surprise was around “tuning” the data processing pipelines: optimizing data processes so that jobs finished in the appropriate time frame and at the lowest possible cost within it. In many early interviews, users talked about needing to control tuning themselves, but the more people we spoke to, the more we found who admitted it was a job they’d happily give up.
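For readers outside data engineering, "tuning" in practice meant choosing per-job Spark resource settings like the ones below. The class name, jar, and values are illustrative only, not recommendations; finding the right values for each job is exactly the chore many engineers told us they would happily hand off:

```shell
# Typical spark-submit knobs a data engineer adjusts per job:
#   --num-executors                cluster size (performance vs. cost)
#   --executor-memory              memory per worker
#   --executor-cores               parallelism per worker
#   spark.sql.shuffle.partitions   shuffle granularity
spark-submit \
  --class com.example.LocationPipeline \
  --num-executors 8 \
  --executor-memory 4g \
  --executor-cores 2 \
  --conf spark.sql.shuffle.partitions=64 \
  pipeline.jar
```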
Verified personas
Defining personas based on overlapping needs
Our assumptions about the personas were validated, but we discovered surprising nuances in how they interact with each other, and how swiftly new technologies were changing their processes.
Our primary persona’s journey
A user journey map to give the whole team a shared understanding of Gordon’s experience
Journey maps bridge the gap for our product and dev teams, aligning everyone on Gordon’s experience. Journey maps also highlight key opportunities where we can best support his needs.
Feature ideation and prioritization
Brainstorming user stories and solutions that lead to positive outcomes for our users
After analyzing a wealth of user feedback, we identified our personas' pain points, needs, data inquiries, core jobs-to-be-done, and desired outcomes. Condensing these insights into user stories fueled several days of productive discussions within the product team. These collaborative sessions played a critical role in shaping a well-organized and prioritized product roadmap.
V1 release
Deploying a V1 release to satisfy customer requirements
Leveraging our prioritized product roadmap we charted our go-to-market strategy. However, before tackling the extensive user needs we identified, we needed to prioritize a critical first step: delivering a baseline data processing pipeline product that fulfilled the requirements of our key internal and external customers. We successfully met this objective, ensuring a swift product launch.
OLP Pipelines V1 Web-UI
Data processing pipelines list page
The V1 release prioritized getting our customers operational quickly. To ensure a seamless user experience we meticulously aligned the web UI's terminology and workflows with the API. This task proved to be more challenging than we had originally anticipated.
OLP Pipelines V1 Web-UI
Data processing pipelines details
OLP Pipelines V1 Web-UI
Data processing pipelines configuration
OLP Pipelines V1 Web-UI
Data processing pipelines developer guide
Our user research emphasized the need for a robust and user-friendly developer guide. A high-priority item on our roadmap involved ensuring that the quality of the supporting visuals within the documentation matched the high standards of the user interface. However, by the time this specific task was prioritized I had transitioned to a new project (Plunk). I had the opportunity to influence the specifications and early designs, but I did not get to implement those designs.
V2 releases
Creating positive outcomes with V2 releases that address customer feedback
Following the successful launch of V1, and the influx of valuable customer feedback, our product team faced a strategic challenge: balancing high-priority roadmap items with the numerous customer requests. We embraced the opportunity to identify high-value but achievable user stories, then started to iterate, test, and release.
Problem #1
Gordon and Elena need more visibility into pipeline performance
Elena, our product manager, wants regular performance updates, and Gordon, our primary data engineer persona, has more important things to do than generate reports. Debugging is also a large part of Gordon’s process, yet in V1, logging had no place in the UI.
V2 Web-UI feature #1
An updated pipelines details UI
The pipeline detail page now features a comprehensive log section. This section displays a chronologically sorted list of all operations and jobs along with relevant metadata for each entry. Additionally, the logging level can be adjusted to control the amount of detail displayed.
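The adjustable logging level works the way standard logging frameworks do: raising the level hides lower-severity detail, lowering it reveals everything. A small sketch with Python's stdlib logging module (logger name and messages are invented for the example):

```python
import io
import logging

# Capture log output in a string so the effect of the level is visible.
stream = io.StringIO()
logger = logging.getLogger("pipeline.demo")
logger.handlers.clear()
logger.propagate = False
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)

logger.setLevel(logging.WARNING)   # terse: warnings and errors only
logger.debug("fetched partition 12")   # suppressed
logger.warning("job retried once")     # shown

logger.setLevel(logging.DEBUG)     # verbose: everything
logger.debug("fetched partition 13")   # now shown

lines = stream.getvalue().splitlines()
```

The UI version of this is the same trade-off: Elena skims at WARNING, Gordon drops to DEBUG when he is chasing a failure.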
Pipeline detail page: metadata + logging levels tab
Pipeline detail page: operations tab
Pipeline detail page: jobs list tab
Problem #2
Gordon needs more control over cluster parameters
Customers with intricate data processing workflows, involving multiple interconnected pipelines, face a constant struggle in balancing performance and cost. For Gordon, this translates to a critical need for fine-tuning the performance of the pipelines he deploys and maintains.
V2 Web-UI feature #2
Cluster configuration
We created a UI that allows our users to fine-tune the cluster configuration for both batch and streaming data processing jobs.
Pipeline detail page: metadata + logging levels tab
Post mortem
What did we learn?
Our many customer conversations served as a valuable reality check to our initial assumptions. Here are some key takeaways for my team, and some of my personal thoughts:
Customer needs are diverse
A common oversight in UX is to treat personas as monolithic groups. But even within a well-defined user group, like the specialized engineers in this project, significant variations in work styles and preferences quickly dispel that misconception. The challenge of accommodating such a diverse range of needs within our user base became a balancing act.
Not speaking directly with your customers can lead to products that miss the mark
I empathize with the difficult situation our management team faced in dealing with our most important customers, a group of engineers at BMW and Audi who, if we’re being honest, didn’t really want to work with us. However, relying solely on a list of requirements presented a limitation. Ideally, a deeper understanding of their goals, needs, and pain points would have allowed us to explore solutions that addressed their specific challenges and create more positive outcomes. I’d love to tell a story about how UX and product rose above this challenge and delighted our customers, but our only opportunity for engagement was a single four-hour phone meeting where our participation was restricted (muted and unable to speak). The entire first hour of the meeting, the engineers angrily complained about our product. We had let them down. While this experience wasn't ideal for product development, it did serve as a valuable lesson in the importance of open communication and user-centered design.
Don’t skip contextual interviews
I’m a big fan of sitting with users at their desks and watching them work. However, because of the logistics of the HERE Berlin office, where many of our internal customers work, a lot of our early interviews were in conference rooms. In those early interview sessions, the data engineers we spoke with primarily focused on the development aspects of their workflow, encompassing ‘research’, ‘coding’, and ‘deployment’ phases. The larger significance of maintaining operational data pipelines within their daily tasks only became apparent later, when I was finally able to conduct contextual interviews, observing engineers directly at their desks. This experience highlights a valuable lesson: we can’t rely solely on users to articulate their entire workflow. Contextual interviews, where users are observed in their natural environment, are a great method for uncovering these valuable details.