Design Partners in Engineering Decisions: When and How to Include Them
When you’re building software to analyse data from a rover 200 million kilometres away, you don’t get many chances to iterate. The feedback loop is measured in light-minutes, not sprint cycles. But the real lesson I learned working on PIXL at NASA’s Jet Propulsion Laboratory wasn’t about the constraints of interplanetary communication. It was about what happens when you treat design as foundational rather than decorative.
The scientists using Pixlise, the software we built to analyse X-ray spectrometry data from Mars, went from taking roughly a year to publish papers to doing it in weeks. That’s not because we wrote better code. It’s because we understood how they actually worked before we wrote any code at all.
Most engineering leaders I talk to don’t dispute that design matters. They just think it can wait. Ship first, polish later. Get something in front of users and iterate. The intention is good. Stay lean, move fast, avoid over-engineering. But there’s a hidden assumption in that approach: that design is about polish. That it’s the paint you apply after the structure is built.
It isn’t. And that misunderstanding costs more than most teams realise.
The False Economy of “We’ll Figure Out the UX Later”
The pressure to ship is real. I run a consultancy and I see it constantly. Stakeholders who want a working prototype by next month. Investors who want to see progress. Clients who’ve already announced launch dates. Stopping to do user research feels like a luxury when the sprint board is overflowing.
So what happens? Engineers become designers by default. We sketch out interfaces based on how we think the data should flow. We make assumptions about what users want because asking them would take too long. We ship something that works, in the technical sense, and wait for feedback.
The feedback arrives. It’s not what we expected.
BCG research shows that nearly half of C-suite executives report that more than 30% of their tech projects run over budget and behind schedule. The reasons they cite include misalignment between technical and business teams, unrealistic timelines, and insufficient resources. But underneath those symptoms there’s a quieter problem. Teams building software that solves the wrong problem. Or solves the right problem in a way that doesn’t fit how users actually work.
Barry Boehm calculated in the 1980s that fixing a defect in production costs roughly 100 times more than catching it during design. A study at Ricoh put it more starkly: the cost of fixing a design defect was $35 during design and $690,000 in field service. These numbers vary by context but the pattern is consistent. The later you discover a mismatch between what you built and what users need, the more expensive it becomes to fix.
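The multiplier compounds quickly. A back-of-envelope sketch, using only the Ricoh figures quoted above (the phase labels and defect counts are illustrative, not from the study), makes the asymmetry concrete:

```typescript
// Back-of-envelope comparison of defect-fix costs by discovery phase.
// The two dollar figures come from the Ricoh study cited above; the
// ten-defect scenario is purely illustrative.
const fixCost = {
  design: 35,            // USD: defect caught during design
  fieldService: 690_000, // USD: the same defect found in the field
};

const multiplier = fixCost.fieldService / fixCost.design;
console.log(`Fixing in the field costs ~${Math.round(multiplier)}x more`);

// Even a handful of escaped design defects dwarfs the cost of the
// research that would have caught them.
const escapedDefects = 10;
const fieldTotal = escapedDefects * fixCost.fieldService;  // 6,900,000
const designTotal = escapedDefects * fixCost.design;       // 350
console.log(`10 escaped defects: $${fieldTotal} in the field vs $${designTotal} at design time`);
```

The exact numbers matter less than the shape of the curve: whatever your context, the ratio between the two columns is the real budget of an early design phase.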
And yet we keep placing the same bet. That we’ll be able to course-correct after launch. Sometimes we’re right. Often we’re not. And the lurking usability problems become emergencies that consume the engineering team’s roadmap for months.
What “Design” Actually Means Here
When I say design should be included early, I’m not talking about colour palettes or button placement. I’m talking about a specific kind of work that engineers often don’t have time or training to do well.
User research is the practice of understanding who will actually use this software, what they’re trying to accomplish, and how they currently get that work done. It’s not about asking people what features they want. It’s about observing and interviewing to understand the shape of the problem.
Interaction design is figuring out how users will accomplish their goals through the software. What’s the flow? What happens when things go wrong? Where are the decision points and how do we support them?
Design systems are shared vocabularies. Components, patterns, and conventions that both designers and engineers can reference. They reduce ambiguity and make it easier to build consistently.
Service design zooms out further. It’s about the processes people use, not just the screens they interact with. How does this software fit into their broader workflow?
At JPL, the Human Centered Design group takes this seriously. As one of their senior designers put it, “By designing our processes around the needs of the people who use them, we not only save the taxpayers money by making people more efficient, we maximise the science we can get out of each mission.” That’s the frame. Design as leverage for outcomes, not decoration.
What This Looks Like in Practice: The PIXL Story
PIXL, the Planetary Instrument for X-ray Lithochemistry, is an X-ray spectrometer mounted on the Perseverance rover. It analyses the chemical composition of Martian rocks at a level of detail that helps scientists understand the planet’s geological history and search for signs of ancient microbial life.
The instrument generates enormous amounts of data. Before Pixlise, the scientists analysing that data relied on Excel-based pipelines they’d built over years. Those pipelines worked. Scientists are resourceful. But they didn’t enable collaboration. Each researcher had their own spreadsheets, their own methods, their own version of the truth.
We built Pixlise to change that. It’s an open-source web application that allows scientists and researchers to analyse PIXL data in near real-time, collaboratively, while retaining the flexibility of their existing workflows. They can share insights, build on each other’s work, and move from raw data to published findings faster than ever before.
But here’s the thing. That outcome wasn’t inevitable. It happened because we had a dedicated UX research function embedded in the project from the start.
The researchers didn’t just sketch interfaces. They interviewed extensively across the user base. Scientists, researchers, anyone with an interest in analysing rover data. They watched how people worked. They asked questions that engineers wouldn’t think to ask, because engineers are focused on what’s technically possible, not on what matches how scientists actually think.
What emerged from that research shaped the product in ways that wouldn’t have happened otherwise.
The charts had to make sense to users, not just to engineers. Technical accuracy wasn’t enough. The visualisations needed to match the mental models that planetary scientists already had. The way they think about spectrometry data, the comparisons they instinctively want to make.
The interaction between visual data and analysis tools had to feel natural. Scientists are looking at images from Mars alongside spectral data. The way those two things connect in the interface matters enormously. Get it wrong and you’re fighting the tool instead of using it.
The scripting and function-writing capabilities had to extend existing workflows, not replace them. Scientists weren’t going to abandon their Excel pipelines overnight. We needed to augment what they already did, not demand they start from scratch.
None of this would have been obvious from the requirements document. It came from research.
The result? Scientists felt ownership of the product. They’d been consulted, heard, and involved. They weren’t being handed something built for them. They were using something built with them.
And the measurable outcome speaks for itself. The time from data collection to published paper dropped from roughly a year to weeks. That’s not a marginal improvement. That’s a transformation in how science gets done.
Technical Feasibility Workshops: Where Design Meets Engineering
Early design involvement doesn’t mean designers go off and create something in isolation, then hand it to engineering. That’s just moving the handoff earlier. The real value comes when design and engineering work together from the start.
One practical way to do this is the technical feasibility workshop. A structured session early in a project where both disciplines surface constraints together.
When to run one: Before architecture decisions are locked. During discovery or definition phases, when you’re still figuring out what you’re building.
Who’s in the room: At minimum, an engineering lead and someone responsible for design or user research. Ideally also a product owner and, if at all possible, an actual user or customer.
What you cover:
User story walkthrough (designer-led). What is the user trying to accomplish? What does success look like for them? Where are the pain points today?
Technical constraint mapping (engineer-led). What are the hard constraints? What’s expensive or risky? What technical debt already exists that will shape what’s possible?
Overlap identification (collaborative). Where do user needs and technical constraints intersect? What’s both desirable and feasible? What trade-offs are we already facing?
Risk surfacing (collaborative). What happens if we get this wrong? Where are the assumptions we’re most uncertain about?
Decision points. What do we need to prototype or test to reduce uncertainty?
This isn’t a one-time event. Spotify describes their approach as shifting weights. Design and engineering don’t hand off to each other. They shift who’s leading at different phases while remaining in the room together throughout. The designer’s job isn’t just to create designs. It’s to facilitate getting to the best design outcome, which requires engineering input from the beginning.
GetYourGuide calls it “handshake instead of handoff.” The difference is subtle but significant. A handoff implies sequential work. Design finishes, then engineering starts. A handshake implies parallel collaboration. We’re working on this together, even when one of us is leading.
Collaborative Prototyping: Keeping the Feedback Loop Tight
The old model for design-engineering collaboration looked like a relay race. Designers did their work, documented it exhaustively, and threw it over the wall. Engineers picked it up, discovered a dozen reasons why it wouldn’t work as specified, and either built something different or sent it back for revision.
The new model keeps both disciplines in the room together.
Engineers in user research sessions. Not to solve problems, that’s not the point, but to watch and listen. When engineers see real users struggling with a specific interaction, it changes how they think about technical trade-offs. “We should refactor this flow” becomes a hard sell. “In eight out of ten sessions, users failed to complete this task without backtracking, here’s the video” is a different conversation.
Designers in technical spike reviews. When engineering is exploring whether something is feasible, designers should be there to understand what’s possible and what’s expensive. The best design solutions often emerge from understanding constraints, not ignoring them.
Shared prototypes that evolve together. Rather than a finished design spec that gets implemented, there’s a prototype that both sides iterate on. Designers adjust based on technical feedback. Engineers adjust based on user feedback. The artifact gets better faster.
Design systems as living contracts. A good design system isn’t just a library of components. It’s a shared agreement between design and engineering about how things work. When both sides contribute to it and reference it, there’s less ambiguity and less rework.
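As a concrete sketch of what “living contract” can mean, consider design tokens defined once as typed data and imported by both design tooling and application code. None of these names come from Pixlise or any real design system; they are hypothetical, and the pattern is the point: when both sides reference the same source of truth, drifting away from it becomes a compile error rather than a rework ticket.

```typescript
// Hypothetical shared design-token module. All names are illustrative.
const tokens = {
  color: {
    surface: "#1e1e24",
    accent: "#ffd24a",  // e.g. highlight for a selected spectrum
    danger: "#e5484d",
  },
  spacing: { sm: 4, md: 8, lg: 16 }, // px
} as const;

// A typed contract: components may only use colours the design system
// defines, so one-off "off-system" styles fail to compile.
type TokenColor = keyof typeof tokens.color;

interface ChartTheme {
  background: TokenColor;
  highlight: TokenColor;
}

function makeChartTheme(theme: ChartTheme): { background: string; highlight: string } {
  return {
    background: tokens.color[theme.background],
    highlight: tokens.color[theme.highlight],
  };
}

// makeChartTheme({ background: "surface", highlight: "accent" }) type-checks;
// makeChartTheme({ background: "#ffffff", highlight: "accent" }) does not.
const chartTheme = makeChartTheme({ background: "surface", highlight: "accent" });
```

The design tool exports the token file; engineering imports it. Neither side edits the other’s work, but both are bound by it.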
The contrast with projects that lack this collaboration is stark. I’ve worked on projects where there was no formal design process, where we just talked through requirements and started building. Every single time, the result was the same. Larger iterative cycles. Unclear expectations. And eventually a tense conversation where the customer’s vision turned out to be fundamentally different from what we’d built.
These days I refuse to work on projects that don’t have some element of design thinking baked in. Not because I’m precious about process. Because I’ve learned what it costs when you skip it.
“Engineers Can Figure It Out” (and Other Expensive Beliefs)
When I talk to skeptical engineering leaders about involving design earlier, I hear a few recurring objections.
“Engineers can figure it out these days.” There’s some truth here. Engineering skill sets have expanded. Many developers have better design intuition than their predecessors. Tools have improved. But there’s a difference between being able to make reasonable interface decisions and being skilled at understanding user needs. They’re different disciplines. An engineer can figure out a plausible solution. A designer with research can figure out the right one.
“AI can build it, design just slows us down.” This is increasingly common and it gets the causality backwards. AI removes coding friction. It makes implementation faster. But it doesn’t remove the need to know what to build. In fact, faster implementation makes upfront clarity more valuable, not less. If you can spin up a feature in hours instead of weeks, the bottleneck shifts entirely to knowing which feature to build. Good UX research gives developers a clear understanding of what needs to be built and how interactions should work. AI can still be leveraged for implementation. But research removes the guesswork from the development cycle: developers build against validated needs rather than assumptions.
“We don’t have time or budget for design.” This one I take personally because I’ve lived the alternative.
Early in my consultancy work I took on a project without any structured design process. We discussed requirements regularly. Had good rapport with the client. Thought we understood what they wanted. We were wrong. The gap between their expectations for the user interface and data flow and what we built only became clear late in the project. I made a financial loss on that engagement. Not because the code was bad, but because we built confidently in the wrong direction.
That experience taught me two things. First, verbal discussions aren’t enough. You need structured workshops, visual artifacts, and iterative feedback so the customer sees what’s being built and can flag misalignment early. Second, the “cost” of design isn’t additive. It’s preventive. You’re not spending extra. You’re avoiding waste.
The minimum viable design investment isn’t a full UX team and months of research. It’s someone asking “who is this for and how will they use it?” before any code gets written. It’s regular feedback sessions with real artifacts, not just status updates. It’s treating user understanding as an engineering requirement, not a nice-to-have.
Design as an Engineering Practice
The lesson from PIXL wasn’t that NASA has resources most teams don’t. It’s that treating design as foundational, as a structural part of how you build rather than a finishing touch, changes outcomes in ways you can measure.
Scientists publishing papers in weeks instead of a year. Users who feel ownership because they were consulted. Features that match how people actually work, not how engineers assumed they would work. These aren’t soft benefits. They’re the kind of outcomes engineering leaders care about.
If you’re skeptical, I’m not asking you to overhaul your process. I’m asking you to try an experiment. Before your next project kicks off, ask four questions:
Have we talked to the people who will use this?
Do we understand how they work today, not just what they say they want?
Have we surfaced technical constraints alongside user needs, in the same room, at the same time?
Is someone responsible for holding the user’s perspective throughout the project, not just at the beginning?
If the answer to any of those is no, you’re placing a bet. Maybe it’ll pay off. But the research, and my experience, suggest the odds aren’t in your favour.
When you’re building for Mars, you don’t get many chances to iterate. When you’re building for anyone else, you might get more chances. But each iteration is more expensive than getting it closer to right the first time. Design isn’t the opposite of moving fast. It’s how you avoid moving fast in the wrong direction.