PAM has been running on .NET since 2013. That's a long time in the .NET ecosystem — long enough to have lived through the framework-to-core transition, the annual release cadence, the move to LTS versions, and the introduction of capabilities that simply didn't exist when the platform was first built.
Runtime choices are not purely technical decisions. They're investment decisions. A runtime that loses vendor support, loses the ecosystem around it, or requires expensive migration to maintain affects the platform's economics for years. Getting this right over a decade of continuous operation requires a deliberate upgrade strategy — not chasing versions, and not ignoring them either.
Where we started: .NET Framework
PAM started on .NET Framework 4.5. In 2013, this was the obvious choice. .NET Core didn't exist yet. ASP.NET MVC was mature, Entity Framework was the standard ORM, and the Windows deployment model (IIS + Windows Services) matched the operational requirements of the operators we were building for.
The technology choices from 2013 are visible in the architecture today: IIS hosting, Windows Services for background processing, SQL Server as the primary data store, and an MVC-based back office. These weren't wrong choices — they were appropriate for the environment and have proven durable. The deployment model that runs PAM in production today is recognizably descended from the 2013 architecture.
The framework-to-core transition
The hardest version decision in PAM's history was moving from .NET Framework to .NET Core — the lineage that, from .NET 5 onward, became simply .NET, the unified platform.
A platform that's been running live operations for a decade doesn't migrate lightly. .NET Framework to .NET Core isn't a recompile — it requires evaluating every dependency, every platform-specific API call, every deployment assumption. For a system with 40+ business modules, 16+ integrations, and continuous live deployments, a migration that breaks something in production is unacceptable.
The decision was made deliberately: the current generation of PAM was rebuilt targeting .NET (not Framework) from the beginning — not migrated line by line from the existing codebase, but restructured with the lessons of the previous decade built in. The business logic, domain knowledge, and compliance rules accumulated over 10 years are preserved. The infrastructure layer was rebuilt with modern capabilities as first-class design inputs.
Why .NET 10
We don't upgrade versions because they're new. We upgrade when the platform genuinely benefits. .NET 10 cleared that bar on three dimensions:
EF Core performance. Entity Framework Core 10 includes substantial improvements to query translation and bulk operation performance. PAM's high-volume query paths — wallet balance reads, audit log queries, player segment evaluations — benefit directly. The improvement is measurable in production without touching application code.
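As an illustration of the kind of bulk operation that benefits, EF Core's ExecuteUpdateAsync translates a set-based update into a single UPDATE statement instead of loading entities into memory one by one. The entity and property names below are hypothetical; the API is standard EF Core:

```csharp
// Sketch only — Bonuses, BonusStatus, and PamDbContext are illustrative
// names, not PAM's actual model. ExecuteUpdateAsync (EF Core 7+) runs
// server-side as one UPDATE, with no change tracking overhead.
var expiredCount = await db.Bonuses
    .Where(b => b.Status == BonusStatus.Active && b.ExpiresAt < DateTime.UtcNow)
    .ExecuteUpdateAsync(set =>
        set.SetProperty(b => b.Status, BonusStatus.Expired));
```

For high-volume read paths like wallet balance lookups, the equivalent win comes from improved query translation: the same LINQ produces tighter SQL without any application-code change.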
OpenTelemetry integration. .NET 10's built-in OpenTelemetry support is meaningfully better than earlier versions. Traces, metrics, and logs integrate cleanly with the Aspire dashboard without custom instrumentation setup. For a platform that needs end-to-end visibility across API, business logic, background services, and integrations, observability that works by default is worth upgrading for.
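The "no custom instrumentation setup" claim can be made concrete. A minimal sketch of the wiring in Program.cs, using the standard OpenTelemetry.* packages ("pam-api" is an illustrative service name, not PAM's actual configuration):

```csharp
using OpenTelemetry.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(r => r.AddService("pam-api"))
    .WithTracing(t => t
        .AddAspNetCoreInstrumentation()   // spans for incoming HTTP requests
        .AddHttpClientInstrumentation()   // spans for outgoing integration calls
        .AddOtlpExporter())               // ships traces to the collector/dashboard
    .WithMetrics(m => m
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter());
```

With this in place, traces, metrics, and logs flow to any OTLP-compatible backend — including the Aspire dashboard during development — without per-request instrumentation code.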
Long-term support. .NET 10 is an LTS release. Operators running live operations in regulated markets need confidence that the runtime under their platform is supported for years, not months. LTS designation provides that guarantee from Microsoft: an LTS release is supported for three years from release, while STS releases get only eighteen months.
.NET Aspire: development-time observability
The most operationally valuable addition in recent versions isn't a runtime capability — it's .NET Aspire, the developer orchestration framework.
Aspire provides a local development dashboard that mirrors production observability: live service health, dependency graphs, distributed traces, and log streams — all in one view, running locally without infrastructure setup. When a developer is working on the deposit flow and wants to trace what happens from the API endpoint through the business logic, down to the database query, and out to the integration callback, they can see the full trace in the Aspire dashboard without any additional tooling.
Before Aspire, understanding how a request flowed through a multi-service system required either extensive logging, a separate tracing setup, or careful manual inspection. With Aspire, the trace is visible by default during development — the same trace structure that runs in production. Bugs that previously required a production deploy to diagnose can be identified locally, in a reproducible environment, before they reach production.
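To make the orchestration concrete, a sketch of what an Aspire AppHost looks like. Project and resource names here are illustrative, not PAM's actual composition; the app-model API (DistributedApplication.CreateBuilder, AddProject, WithReference) is standard Aspire:

```csharp
// Hypothetical AppHost — composes the local development environment
// so every service, dependency, and trace shows up in one dashboard.
var builder = DistributedApplication.CreateBuilder(args);

var cache = builder.AddRedis("cache");          // local Redis container
var messaging = builder.AddRabbitMQ("messaging"); // local RabbitMQ container

builder.AddProject<Projects.Pam_Api>("pam-api")
    .WithReference(cache)
    .WithReference(messaging);

builder.Build().Run();
```

Running the AppHost starts the dependencies, injects their connection strings into the referencing projects, and opens the dashboard with live traces across all of them.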
OpenTelemetry in production
PAM ships with full OpenTelemetry instrumentation across every service. This isn't an add-on — it's a first-class architectural requirement in the current generation.
Every API request carries a trace ID. That trace propagates through the API layer into the business logic, down to the database query, and out to any external integration call. If a player's deposit is slow, the trace shows exactly where the time was spent: in the payment provider's response time, in the bonus evaluation logic, or in a database query that needs an index. The diagnostic path is the trace — not log file archaeology.
Correlation IDs flow from the player's browser through every layer of the stack. When a support agent asks "what happened to this transaction?", the answer is a trace ID lookup — not a manual search across multiple log files from different services.
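W3C traceparent propagation between services is handled automatically by OpenTelemetry; what a platform typically adds on top is a business-level correlation ID that survives into support tooling. A hedged sketch of that as ASP.NET Core middleware — the header name and shape are illustrative, not PAM's actual implementation:

```csharp
using System.Diagnostics;

// Illustrative middleware: accept a caller-supplied correlation ID,
// fall back to the current trace ID, and echo it back to the client.
app.Use(async (context, next) =>
{
    var correlationId =
        context.Request.Headers["X-Correlation-Id"].FirstOrDefault()
        ?? Activity.Current?.TraceId.ToString()
        ?? Guid.NewGuid().ToString("N");

    context.Response.Headers["X-Correlation-Id"] = correlationId;

    // Tag the ambient activity so the ID appears on every span and log line.
    Activity.Current?.SetTag("correlation.id", correlationId);

    await next();
});
```

With the ID tagged onto the trace, the support agent's question becomes a single lookup in the observability backend rather than a search across services.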
The deployment model: IIS and Windows Services
PAM runs on IIS (web/API) and Windows Services (background processing). This is not the cloud-native container model, and it's intentional.
Operators in regulated markets often have specific infrastructure requirements — on-premises deployments, specific audit controls, familiar operational tooling. IIS and Windows Services are well-understood, well-supported, and operate within the compliance frameworks operators already have. They also integrate cleanly with the AWS services PAM uses in production: RDS (SQL Server), Amazon MQ (RabbitMQ), ElastiCache (Valkey/Redis), and SES.
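The Windows Services side of this model maps directly onto the modern .NET hosting APIs. A minimal sketch using the Microsoft.Extensions.Hosting.WindowsServices package (the worker class name is hypothetical):

```csharp
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Runs under the Windows Service Control Manager when installed as a
// service; runs as a normal console app during development.
builder.Services.AddWindowsService();
builder.Services.AddHostedService<SettlementWorker>(); // illustrative worker

builder.Build().Run();
```

The same binary behaves as a console app locally and a Windows Service in production, which keeps the development loop and the regulated deployment model from diverging.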
The architecture is cloud-compatible — the event-driven design, the Redis backplane, and the RabbitMQ messaging layer are all cloud-ready. The current deployment model isn't a constraint; it's a deliberate fit to where our operators are today.
What a decade of .NET teaches you
The most important lesson isn't technical. It's organizational: the team that understands why a decision was made is more valuable than the decision itself.
PAM has been built by the same core team across its entire history. When the ORM behavior changed between EF Core versions and a query started returning different results, the person who fixed it knew why that query was written the way it was — because they wrote it. When the RabbitMQ connection handling needed to change for a new deployment topology, the person who made the change understood the full message flow, because they designed it.
This institutional knowledge doesn't appear in the codebase. It's in the team. And it's the most durable asset the platform has — more durable than any specific technology choice, because technology choices can be changed. Accumulated domain understanding can't be transferred by documentation alone.
PAM runs on .NET 10 and EF Core 10, with full OpenTelemetry instrumentation and .NET Aspire for development orchestration. The background services run as Windows Services. The web and API layers run on IIS. The AWS infrastructure handles persistence, messaging, and caching. The stack is deliberately unsexy — proven, well-supported, with large talent pools and long support windows. These are the right properties for a platform that has to be running ten years from now.
The principle
Runtime is not an afterthought. The choice of runtime affects performance characteristics, the talent pool you can hire from, the support window you can commit to operators, and the observability tools available to your team. Getting these decisions right — and revisiting them deliberately when the landscape changes — is part of platform stewardship.
We don't chase versions. We upgrade when the platform benefits. The test is simple: does this change make the platform more reliable, more observable, more maintainable, or more performant for the operations it runs? If yes, we upgrade. If the answer is "because it's new," we wait.