Platform engineering, end to end: a conversation with Pat Cooney

United Kingdom (via direct)
// Job Type
Full Time
// Salary
Not disclosed
// Posted
1 day ago

About the Role

We sat down with Pat Cooney, Head of Platform Engineering, to talk about what Platform Engineering means here and where agentic AI fits into the picture. Pat has spent over a decade at Optiver across markets, regions, and roles.

In one sentence, what is Platform Engineering?

It’s the foundation we build the business on, and that everyone uses every day to be productive and to deliver continuous improvement across our trading.

Why does that matter more now than it did, say, five years ago?

Technology just keeps getting more important: machine learning, data, AI, trading systems. They’re more and more central to the industry and to the firm. The ability to scale as easily as possible, to innovate and iterate quickly — those things are really crucial. And I think that’s a long-term trajectory, in the world and for us.

Given that the platform is the foundation everyone builds on, how do you make sure it’s something people actually want and not just tolerate?

We’re trying and learning. You have to stay close to what the user is trying to achieve and the best way to support that. Be sensitive to the user’s experience and their productivity. Don’t go away for years and build something in isolation: talk to users, get things in front of them quickly, work out what works and what doesn’t. The test is to behave as if people have a choice, and earn adoption.

And what does that look like day to day?

It’s a mixture of things. There’s always a good amount of direct partnering with teams to deliver their plans, and then a big chunk of working on the platform itself: making improvements, taking out frictions, and working out what the next generation should look like. Daily delivery is a bit of watching a new business launch or iterating on one, and a bit of that longer platform work, with whoever needs to be involved: other platform teams, users, technology teams, vendors, partners.

Was this always the direction, or did it take some bumps to get here?

It’s been a gradual process. We have a history of localising technology across regions. Over time, we felt we were a little subscaled: that we could remove friction and scale better by doing things once and well. That’s taken a few years, ramping up and eventually forming a platform team.

Zooming out a bit, where does developer experience and AI fit in all of this?

It comes back to the trajectory of technology, data, and AI mattering more and more to winning. More and more, people’s daily experience is building systems, working with data, working with AI. So developer experience is people’s daily experience. The more productive we can make people, the more friction we can remove, the better off we all are. Generative AI is increasingly part of that experience too: being able to provide it as a product to the organisation, in a seamless way, is really the goal.

How does agentic AI play into what you just described?

Agentic AI is increasingly just the way people build systems. That could be a coding agent, but it’s also a broader SDLC (Software Development Life Cycle) where agents might be carrying out changes in a production system or taking on some parts of the development process. We’re beginning to use agents more in development, and over time, more and more of how we build will be about defining and orchestrating agents.
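The "defining and orchestrating agents" idea can be made concrete with a small sketch. This is a hypothetical illustration, not Optiver's actual stack: the `Agent` and `Orchestrator` names and the stand-in lambdas are inventions for this example, where each registered agent would in practice wrap a real coding or operations agent.

```python
# Minimal sketch of orchestrating agents over a decomposed workflow.
# All names here are hypothetical; real agents would wrap LLM-backed tools.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """An agent is a named capability: it accepts a task and returns a result."""
    name: str
    run: Callable[[str], str]

@dataclass
class Orchestrator:
    """Routes each step of a workflow to the agent registered for that step."""
    agents: dict[str, Agent] = field(default_factory=dict)

    def register(self, step: str, agent: Agent) -> None:
        self.agents[step] = agent

    def execute(self, workflow: list[tuple[str, str]]) -> list[str]:
        # A workflow arrives already decomposed into (step, task) pairs;
        # the human's job is the decomposition, not the hands-on work.
        return [self.agents[step].run(task) for step, task in workflow]

# Stand-in agents simulating a coding step and a review step.
coder = Agent("coder", lambda task: f"patch prepared for: {task}")
reviewer = Agent("reviewer", lambda task: f"review passed for: {task}")

orch = Orchestrator()
orch.register("code", coder)
orch.register("review", reviewer)

results = orch.execute([("code", "add metrics endpoint"),
                        ("review", "add metrics endpoint")])
print(results[0])  # patch prepared for: add metrics endpoint
```

The point of the sketch is the shift Pat describes: the person writes the workflow decomposition and registers capabilities, while the agents do the step-level work.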

That’s interesting.

Yeah, we’re building with AI, but also building for AI. We’re using the tools to build systems and to operate. But we’re also thinking more about how we build a platform that is friendly to AI. Because our users are humans, but they’re also agents. That means the interfaces we expose, the documentation, the context we give to agents becomes more and more important.

Can you share an example of how it’s being used in practice today?

Yeah. Obviously, there’s a bunch of different people writing code with Claude. But looking further along the delivery process, we also have bots integrated into our environment, speaking with technologists and users and making changes directly. They provide a more natural-language interface for people to get things done with technology.

That feels like a real shift. What has to change to make that work?

There’s obviously the technology change where you need to make that available. But there’s also a real learning process. At Optiver, people have just dived in, building things and learning. What comes out of that is the need to externalise more and more of your knowledge, your team’s knowledge, in a way that agents can actually use. You need to think about how you test and oversee those agents, and how you do that in a probabilistic world rather than a deterministic one.

Does that change the risk profile at all?

Any change brings risk. I think one of the key things here is being very open-minded about not just how agents can create risk, but also how they can be used to mitigate it. It’s not asymmetric: use AI and risk goes up. AI is a tool you can use anywhere, including to manage and reduce risk.

Looking ahead, let’s say a year from now, what shifts do you expect to see?

It’s a brave person who looks a year out: things are changing that fast. I think what is obvious is that people’s roles will shift more from doing things hands-on to thinking about how to decompose different workflows and ultimately reach end goals by instructing and orchestrating agents. It’s an upper-level shift where in a sense, everyone becomes a manager of agents and is constructing a team rather than being the doer themselves. Exactly when that happens, for which companies and which teams, that’ll vary a lot. But that’s where we are heading.

If someone only takes one thing from this, what should it be?

First of all, it’s a really exciting time: in AI, in markets, at Optiver. It’s never been more interesting. We’re working furiously to build the platform to make that work better and better for the firm.

Last one, putting work aside: any pet peeves you’d care to share?

Oh, man, I don’t know. Is it too AI-centric to say that it’s so close, but sometimes just not there yet? I spend way too much time trying to correct my prompts to get it there. Especially with knowledge work: it’s really good at code, not so good at that. That’s probably my biggest gripe right now.
