If a UI doesn’t feel instant, it feels broken and users start to question what they’re seeing. A grid lags, values don’t update when expected, or a filter that used to respond immediately starts to slow down. In high-demand systems like a trader’s workstation, a single desktop may be running many latency-sensitive applications at once, all competing for CPU, memory, and network bandwidth, so issues can show up quickly.
The instinct is to treat these as frontend issues and start looking at rendering or state management. Most of the time, that’s not where the problem starts.
What the UI is exposing is the combined behavior of the system behind it: how data is fetched, how often it is recomputed, where filtering happens, and how many times the same work is repeated. Each part can look reasonable on its own, and the issues only show up once everything is exercised together under real usage. UI problems are rarely local. In that sense, the UI is less a presentation layer and more a boundary where system decisions become visible.
Where the work actually happens

A common pattern in interactive systems is to pull data into the client and handle filtering and aggregation locally. It keeps things flexible and is easy to build on top of, especially when datasets are small.
As usage grows, that same approach starts to shift work into a place that doesn’t scale well. The same dataset gets pulled into multiple views, each filtering slightly differently. Similar computations are repeated, often without coordination. It is common to see multiple windows effectively asking the same question in slightly different ways. What started as a simple interaction now fans out into multiple queries, more data movement, and more client-side processing.
Nothing breaks, but the system gets heavier. Latency starts to show up in places that previously felt instant, and it becomes harder to tell which part of the system is responsible. This works until it doesn’t, and when it doesn’t, it is hard to unwind: by that point, small inefficiencies are no longer hidden and show up immediately in how the system behaves.
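The client-side pattern can be sketched as follows. This is a minimal illustration, not a real API: the `Trade` type and `notionalBySymbol` function are assumptions, standing in for whatever each view does with the full dataset it has pulled down.

```typescript
// Hypothetical sketch: every window pulls the full dataset and shapes it
// locally, so the same scan is repeated per view.
type Trade = { symbol: string; qty: number; price: number };

// Filtering and aggregation both happen in the client, over all rows.
function notionalBySymbol(trades: Trade[], symbol: string): number {
  return trades
    .filter((t) => t.symbol === symbol) // client-side filtering
    .reduce((sum, t) => sum + t.qty * t.price, 0); // client-side aggregation
}

const trades: Trade[] = [
  { symbol: "AAPL", qty: 10, price: 200 },
  { symbol: "MSFT", qty: 5, price: 400 },
  { symbol: "AAPL", qty: 2, price: 210 },
];

// Three windows, three full scans of the same data:
const windowA = notionalBySymbol(trades, "AAPL"); // 2420
const windowB = notionalBySymbol(trades, "MSFT"); // 2000
const windowC = notionalBySymbol(trades, "AAPL"); // repeats windowA's work
```

Each call is cheap on its own; the cost is the fan-out, with every view re-scanning data the client has already paid to move.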
A more sustainable approach is to move that work upstream. Instead of pulling everything and shaping it locally, the client requests exactly what it needs. Filtering, aggregation, and reduction happen closer to the data source, and only the result is sent back. This reduces load on the client, but it also means performance depends on whether the data layer supports those targeted requests efficiently.
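A rough sketch of the upstream version, under the assumption that the data layer can answer targeted queries. The `Query` shape and `runQuery` function are illustrative; in a real system this would be a service endpoint rather than a local function.

```typescript
// Hypothetical sketch: the client describes exactly what it needs and the
// data layer returns only the result.
type Row = { symbol: string; qty: number; price: number };
type Query = { symbol: string; metric: "notional" };

// Stand-in for the data layer: filtering and aggregation happen here,
// next to the data, and only one number crosses the wire.
function runQuery(source: Row[], q: Query): number {
  let sum = 0;
  for (const r of source) {
    if (r.symbol === q.symbol) sum += r.qty * r.price;
  }
  return sum;
}

const source: Row[] = [
  { symbol: "AAPL", qty: 10, price: 200 },
  { symbol: "AAPL", qty: 2, price: 210 },
];

// The client never sees the rows, only the aggregated answer:
const result = runQuery(source, { symbol: "AAPL", metric: "notional" }); // 2420
```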
The hidden cost of duplication

Duplication is another issue that doesn’t show up immediately. Multiple users, or even a single user with several windows open, often request overlapping data. The requests are not identical, but they are close enough that most of the work is repeated.
At small scale, this is acceptable. At larger scale, it creates unnecessary load across the system. The same queries hit the same services, and the same transformations are applied multiple times. From the UI, everything still looks normal. From the system’s perspective, it is doing the same work over and over again.
Avoiding that requires treating requests collectively rather than individually: understanding overlap, deduplicating where possible, and routing work in a way that reduces recomputation.
None of that is visible in the UI itself, but the impact is.
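One minimal form of this is a shared query cache keyed by a normalized request, so that "close enough" queries collapse into the same entry. The key scheme and names here are assumptions for illustration, not a description of any particular system.

```typescript
// Hypothetical sketch: overlapping windows share one computation instead
// of each repeating it.
let computations = 0; // counts how often the expensive work actually runs

// Normalizing the key is what lets near-identical requests collapse into
// the same cache entry (here: order-insensitive symbol lists).
function queryKey(symbols: string[]): string {
  return [...symbols].sort().join(",");
}

const cache = new Map<string, number>();

function sharedQuery(symbols: string[], compute: () => number): number {
  const key = queryKey(symbols);
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // work already done for another window
  computations++;
  const value = compute();
  cache.set(key, value);
  return value;
}

// Two windows asking the same question in slightly different ways:
const w1 = sharedQuery(["AAPL", "MSFT"], () => 42);
const w2 = sharedQuery(["MSFT", "AAPL"], () => 42); // reuses w1's result
// computations is 1, not 2
```

The interesting design choice is the normalization step: how aggressively requests are canonicalized determines how much overlap the system can actually exploit.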
Debugging across layers

When performance degrades, it is rarely enough to inspect the UI in isolation. Understanding what is happening usually means tracing a single interaction end-to-end: where the data originated, how it was transformed, whether filtering was applied before or after it reached the client, and how many times the same query was executed.
Each layer might look healthy when viewed on its own. The issue is how they interact under real usage. This is why debugging UI issues in these systems often turns into debugging the entire pipeline. Without visibility across layers, it is easy to optimize the wrong thing or simply move the problem somewhere else.
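End-to-end tracing usually comes down to tagging every hop with one interaction ID and recording timings against it. The layer names and the in-memory span store below are assumptions made for the sketch; real systems would use a proper tracing library.

```typescript
// Hypothetical sketch: every hop records a timing span against one trace ID,
// so a slow grid update can be attributed to a specific layer.
type Span = { traceId: string; layer: string; ms: number };
const spans: Span[] = [];

function record(traceId: string, layer: string, ms: number): void {
  spans.push({ traceId, layer, ms });
}

// One UI interaction, traced across layers (timings are illustrative):
const traceId = "ui-filter-123";
record(traceId, "gateway", 2);
record(traceId, "query", 40); // most of the time is spent here,
record(traceId, "transform", 5); // not in the UI
record(traceId, "render", 3);

// Attribute the latency: which layer dominates this interaction?
const slowest = spans
  .filter((s) => s.traceId === traceId)
  .reduce((a, b) => (a.ms >= b.ms ? a : b));
// slowest.layer is "query", even though the symptom appeared in the UI
```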
Scaling the way UI is built

There is a second problem that shows up alongside performance. A relatively small group of engineers is responsible for building UI systems, while a much larger group needs to create tools and visualizations. The demand grows faster than the team that owns the infrastructure.
Rebuilding the same components across teams does not scale. It leads to inconsistent behavior, duplicated effort, and systems that diverge over time. With a small number of engineers responsible for the core UI and a much larger group building applications, this quickly becomes difficult to manage.
A more durable approach is to invest in shared abstractions. Instead of each team deciding how to build a grid, apply filters, or manage state, those patterns are implemented once and reused. Developers describe what they need (data, transformations, interactions) and the system handles how it is delivered.
When this works well, small abstractions remove large amounts of repeated work. They also make it possible for many developers to build applications that behave consistently without deep UI expertise.
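A declarative grid spec is one way such an abstraction can look. Everything here (the `GridSpec` shape, `buildGrid`) is invented for illustration; the point is that the developer states what they need and one shared implementation decides how it is produced.

```typescript
// Hypothetical sketch: teams declare a grid (source, filter, columns)
// rather than re-implementing filtering and projection themselves.
type GridSpec = {
  source: string;
  filter?: { field: string; equals: string };
  columns: string[];
};

type Row = Record<string, string | number>;

// One shared implementation, reused by every team.
function buildGrid(rows: Row[], spec: GridSpec): Row[] {
  const f = spec.filter;
  const filtered = f ? rows.filter((r) => r[f.field] === f.equals) : rows;
  // Project down to only the requested columns.
  return filtered.map((r) =>
    Object.fromEntries(spec.columns.map((c) => [c, r[c]]))
  );
}

const rows: Row[] = [
  { symbol: "AAPL", qty: 10, desk: "equities" },
  { symbol: "MSFT", qty: 5, desk: "options" },
];

// A developer describes *what* they need, not *how* to build it:
const grid = buildGrid(rows, {
  source: "positions",
  filter: { field: "desk", equals: "equities" },
  columns: ["symbol", "qty"],
});
// grid contains one row: { symbol: "AAPL", qty: 10 }
```

Because delivery is centralized behind the spec, decisions like where filtering runs or whether results are shared can later change without touching any application code.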
UI and data are not separate concerns

At this point, the distinction between UI and data becomes less meaningful. Decisions about where work happens (whether data is filtered upstream or in the client, whether queries are deduplicated, whether computation is shared) directly affect how the UI behaves. Likewise, how users interact with the UI determines what the underlying systems need to support.
Treating these as separate layers leads to mismatches. Treating them as parts of the same system makes the trade-offs explicit. The UI is where those trade-offs become visible, not because the UI is the problem, but because it is where the system is experienced.
At Optiver, UI and data aren’t separate layers. The same teams design both, so performance, behavior, and scale are solved together.