The Biggest Shifts in OWASP Top 10 2025

There’s a hard truth that usually shows up after a few years of production work, or after one truly nightmarish Friday deployment: your code is going to fail.

It doesn’t matter whether you have perfect unit tests or you wrote everything in Rust. At some point, a disk fills up. A database times out. A dependency returns something you didn’t expect. Maybe your own system does something even stranger because of a race condition you never hit in staging.

On your laptop, you are the localhost wizard: latency is close to zero, network packets never drop, and the database is always reachable. In production, your code has to deal with a far more complex environment. In the worst case, parts of your application may even be insecure.

That’s why the 2025 version of the OWASP Top 10 feels a bit different. It’s still a Top 10 list, but the tone is more like, here are the ways modern systems really break, and here is what attackers do when they notice. OWASP is explicit that 2025 includes two new categories and a consolidation, and the list leans more toward root causes than surface symptoms.

But first, here’s the list itself. I’m putting it up front so the exact 2025 numbering stays clear, since the numbers matter when you talk to developers, auditors or security teams.

OWASP Top 10:

  • A01:2025 – Broken Access Control
  • A02:2025 – Security Misconfiguration
  • A03:2025 – Software Supply Chain Failures
  • A04:2025 – Cryptographic Failures
  • A05:2025 – Injection
  • A06:2025 – Insecure Design
  • A07:2025 – Authentication Failures
  • A08:2025 – Software or Data Integrity Failures
  • A09:2025 – Security Logging & Alerting Failures
  • A10:2025 – Mishandling of Exceptional Conditions

Now, the interesting part is what this list says about how software is built in 2025, and what it says about how it fails.

OWASP Top 10 2025

The pivot from symptoms to root causes in the OWASP top 10

One of the biggest themes in 2025 is that OWASP keeps pushing people away from the visible symptom and toward the underlying failure that caused it.

Symptom lists lead to whack-a-mole security. You patch one XSS, then another pops up on a different endpoint. You fix one broken authentication flow, then someone finds a second path that skips the check. Teams get stuck in a loop where they are always fixing, but never getting safer in a sustainable way.

The OWASP Top 10 2025 list still includes familiar problems, but it nudges you to ask better questions.

Instead of: Why did we leak data?
Ask: Why did our access control, crypto, or error handling allow a leak to happen at all?

Instead of: Why did we ship a vulnerable dependency?
Ask: Why can our build pipeline accept untrusted code and publish it as if it is ours?

You can see this root-cause framing in some concrete changes:

  • A01:2025 – Broken Access Control stays at #1, and SSRF is rolled into broken access control. That is OWASP saying, SSRF is not just a weird request trick, it’s an access control failure in a cloud-shaped world.
  • A03:2025 – Software Supply Chain Failures expands the old vulnerable and outdated components framing into supply chain failures across dependencies, build systems, and distribution infrastructure.
  • A09:2025 – Security Logging & Alerting Failures sharpens the focus from we logged it to we detected it and acted on it, because logging without alerting does not help in the moment you need it.
  • A10:2025 – Mishandling of Exceptional Conditions is new, and it names a reality many teams learn the hard way, when systems are under stress, confused, or half broken, security issues appear.
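The SSRF-as-access-control framing can be made concrete. A minimal sketch, assuming a deny-by-default allowlist of outbound hosts (the host names and helper are illustrative, not from the OWASP text):

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the only hosts this service may call outbound.
ALLOWED_HOSTS = {"api.payments.example.com", "images.example.com"}

def is_allowed_outbound(url: str) -> bool:
    """Deny-by-default check on outbound request targets.

    Treating outbound fetches as an access control decision (the A01
    framing) means any unlisted host is rejected, including link-local
    metadata endpoints like 169.254.169.254 that SSRF commonly targets.
    """
    parsed = urlparse(url)
    if parsed.scheme not in {"http", "https"}:
        return False  # file://, gopher://, etc. are denied outright
    return parsed.hostname in ALLOWED_HOSTS

assert is_allowed_outbound("https://api.payments.example.com/charge")
assert not is_allowed_outbound("http://169.254.169.254/latest/meta-data/")
assert not is_allowed_outbound("file:///etc/passwd")
```

The point is the shape of the check, not the specific list: outbound destinations are an authorization decision, evaluated centrally, with everything unlisted denied.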

If you build systems, the root-cause angle is good news. It means you can invest in a few platform-level measures that cut off entire classes of bugs.

For example, if you keep getting A05:2025 – Injection findings, you can patch one endpoint at a time. That’s expensive, and it never ends. Or you can make the safer choice the default: enforce parameterized queries, push encoding and sanitization into shared libraries, and ban string-built SQL in code review. Those moves don’t just patch, they change the shape of the whole codebase.
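The parameterized-query default is easy to show. A minimal sqlite3 sketch (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

user_input = "alice@example.com' OR '1'='1"

# Unsafe: string-built SQL lets the input rewrite the query.
# query = f"SELECT * FROM users WHERE email = '{user_input}'"

# Safe: the driver binds the value as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_input,)
).fetchall()
assert rows == []  # the malicious string matches nothing
```

With the placeholder version, the classic `' OR '1'='1` payload is just a weird email address that matches no row, instead of a condition that matches every row.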

Same story for A01:2025 – Broken Access Control. If you keep seeing access issues, you can keep adding if-statements and hope nobody misses one. Or you can centralize authorization, enforce deny by default, and make ownership checks part of your domain model. That’s what fix the cause, not the symptom looks like in practice.
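One way to sketch that centralization (the domain model and function names here are illustrative, not a specific framework):

```python
from dataclasses import dataclass

@dataclass
class Document:
    owner_id: int
    shared_with: frozenset = frozenset()

def can_read(user_id: int, doc: Document) -> bool:
    """Single authorization function, reused by every endpoint.

    Deny by default: only explicitly allowed cases return True, and
    ownership is part of the domain model instead of an if-statement
    duplicated per handler.
    """
    if user_id == doc.owner_id:
        return True
    if user_id in doc.shared_with:
        return True
    return False  # everything else is denied

doc = Document(owner_id=1, shared_with=frozenset({2}))
assert can_read(1, doc)       # owner
assert can_read(2, doc)       # explicitly shared
assert not can_read(3, doc)   # everyone else: denied
```

When every endpoint calls the same `can_read`, a missed check is a missing call (easy to spot in review) rather than a subtly wrong inline condition.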

This is the first shift: the OWASP Top 10 2025 reads less like a bug list and more like a guide to the deeper issues that keep showing up in modern software.

Infrastructure-as-code is the new security battleground (A02:2025)

A02:2025 – Security Misconfiguration moved up to #2. The reason is simple and painfully familiar: more of an application’s behavior is now driven by configuration.

A decade ago, behavior lived mostly in code. In 2025, behavior is split across code, YAML, Terraform, CI definitions, feature flag systems, cloud console settings, and secret stores. A bad merge to a manifest can quietly undo months of careful application security work.

Misconfiguration is not a niche issue anymore, it’s one of the common failure modes of the entire stack.

Here are the A02:2025 – Security Misconfiguration failures that show up over and over, in plain language:

  • Default or forgotten credentials. Someone ships a sample config and never disables it. Someone leaves an admin account enabled because they might need it later.
  • Too much information in errors. A stack trace can be a map. It tells an attacker what framework you are using, what modules are present, sometimes even schema names and internal endpoints.
  • Overly permissive permissions. In cloud environments, misconfiguration often means we gave this service too much power. It starts as convenience, then it becomes permanent.
  • Exposed admin panels and debug modes. A feature that was helpful in staging becomes a disaster in production.

A02:2025 – Security Misconfiguration is also where infrastructure-as-code becomes a security topic, not just a platform topic. Tools like Terraform and Kubernetes are powerful, and they make it easy to ship a mistake at scale.

The answer is to treat config like code, and to stop trusting humans to catch every mistake by eyeballing diffs.

In practice, that usually means three things:

  1. Move validation earlier. If a risky configuration can reach production, the process is part of the vulnerability.
  2. Enforce safe defaults. If a setting is missing, the secure option should win.
  3. Reduce environment drift. A lot of misconfiguration comes from drift, dev differs from prod, staging differs from both, then a change ships without a clear picture of what it does.
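Point 2, safe defaults, fits in a few lines. A sketch with hypothetical setting names; the idea is that a missing key never silently relaxes security:

```python
def load_security_config(raw: dict) -> dict:
    """If a setting is missing, the secure option wins.

    Callers must explicitly opt out of the safe value; a partial
    config from a forgotten environment still comes out safe.
    """
    return {
        "debug_mode": raw.get("debug_mode", False),            # off unless asked
        "tls_verify": raw.get("tls_verify", True),             # verify unless opted out
        "admin_panel_enabled": raw.get("admin_panel_enabled", False),
    }

cfg = load_security_config({"tls_verify": True})  # incomplete config
assert cfg["debug_mode"] is False
assert cfg["admin_panel_enabled"] is False
```

The same principle applies one layer down: an IaC policy check can reject any manifest that leaves a security-relevant key unset, instead of hoping the runtime default happens to be safe.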

A02:2025 – Security Misconfiguration also ties into A09:2025 – Security Logging & Alerting Failures in a way people forget. If misconfiguration is common, you need alerting that tells you when something changed and what it exposed. Logging changes to security-relevant settings is not enough if nobody sees it.

This is the second shift: the security perimeter is no longer just your code. It is your configuration, your infrastructure definitions, and every toggle that can quietly override your intentions.

The supply chain is now part of the app (A03:2025 and A08:2025)

A03:2025 – Software Supply Chain Failures is #3, and the underlying idea is simple: attackers go where the leverage is.

It’s often easier to compromise one widely used package than to hack thousands of companies one by one. It’s often easier to compromise one CI step that runs in countless pipelines than to find a unique exploit for each target.

This is why the boring advice is now the correct advice, treat your build inputs like production dependencies, and don’t assume upstream is stable just because it’s popular.

From there, teams typically go further:

  • Generate an SBOM (software bill of materials) so you have an inventory you can trust. Tools in this space include OWASP Dependency-Check and Dependency-Track, but the bigger point is the capability, not the brand name.
  • Sign builds and verify signatures at deployment. This is where Sigstore and tools like Cosign show up in real pipelines. The goal is that production rejects artifacts that cannot prove where they came from.
  • Lock down CI secrets. Secrets should be scoped, short-lived where possible, and never printed to logs. Logs are not private by default, especially in shared systems.
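The verify-at-deployment idea from the second bullet can be sketched with a plain digest check. Real pipelines would verify Sigstore/Cosign signatures; this simplified stand-in only shows the shape of the gate:

```python
import hashlib
import hmac

def verify_artifact(artifact: bytes, expected_sha256_hex: str) -> bool:
    """Reject artifacts whose digest does not match the recorded one.

    A stand-in for real signature verification: the deploy step refuses
    anything that cannot prove it is the artifact the build produced.
    compare_digest avoids timing side channels on the comparison.
    """
    actual = hashlib.sha256(artifact).hexdigest()
    return hmac.compare_digest(actual, expected_sha256_hex)

artifact = b"app-v1.2.3 binary contents"
recorded = hashlib.sha256(artifact).hexdigest()  # written at build time

assert verify_artifact(artifact, recorded)
assert not verify_artifact(b"tampered contents", recorded)
```

The important property is where the check runs: at deployment, where a failed check blocks the rollout, not in a policy document nobody enforces.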

Now, it’s important to separate A03:2025 – Software Supply Chain Failures from A08:2025 – Software or Data Integrity Failures, because 2025 makes room for both.

A08:2025 – Software or Data Integrity Failures, is still about trust, but it is more about integrity boundaries inside your system. If A03:2025 – Software Supply Chain Failures is the pipeline got compromised, A08:2025 – Software or Data Integrity Failures is we accepted an artifact, update, or data blob without verifying it was valid and untampered.

A08:2025 – Software or Data Integrity Failures shows up in patterns like:

  • Accepting a JSON blob from a queue and assuming it is safe because it is internal.
  • Loading a model file, plug-in, or serialized object without verifying what it is and where it came from.
  • Treating service-to-service traffic as trusted just because it is on a private network.
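The first pattern has a direct countermeasure: verify a message’s integrity before trusting it, even on an internal queue. A sketch assuming a shared secret between producer and consumer (the key handling is simplified for illustration; a real system would pull it from a secret store):

```python
import hashlib
import hmac
import json

SECRET = b"shared-key-from-a-secret-store"  # illustrative; never hardcode

def sign_message(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify_message(message: dict) -> dict:
    """Consumers verify the HMAC before parsing a queue message,
    instead of assuming 'internal' means 'safe'."""
    expected = hmac.new(SECRET, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        raise ValueError("message failed integrity check")
    return json.loads(message["body"])

msg = sign_message({"order_id": 42, "amount": 10})
assert verify_message(msg) == {"order_id": 42, "amount": 10}
```

A tampered body now fails loudly at the integrity boundary instead of flowing into business logic as trusted data.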

This is the third shift: your application boundary is no longer your repository. It includes build pipelines, dependency registries, CI actions, artifact stores, and integrity checks between internal services.

Resilience beats perfection (A10:2025 and its friends)

A10:2025 – Mishandling of Exceptional Conditions is new in 2025. The category is about improper error handling, logic errors, and failing open under abnormal conditions.

This is OWASP saying, Security is not only about what your system does when everything is normal. It is also about what it does when everything is on fire.

A lot of systems are secure on the happy path. Then the auth service times out, the cache is empty, the database is slow, or an API returns unexpected data, and suddenly the system starts making choices no one intended.

A10:2025 – Mishandling of Exceptional Conditions is also where error messages become a security topic. Teams talk themselves into bad outcomes here. Someone says, Let’s include the full stack trace, it helps support. And yes, it helps support, and it also helps attackers do recon. A better compromise is a user-facing error with a correlation ID, and the detailed stack trace stored internally with proper access controls.
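The correlation-ID compromise might look like this, as a framework-agnostic sketch (the logger name is illustrative, and the internal log destination is assumed to be access-controlled):

```python
import logging
import traceback
import uuid

internal_log = logging.getLogger("app.errors")  # shipped to a restricted store

def handle_error(exc: Exception) -> dict:
    """User sees an opaque ID; the stack trace stays internal.

    Support can join the two via the correlation ID without the
    response ever leaking framework, schema, or endpoint details.
    """
    correlation_id = str(uuid.uuid4())
    internal_log.error(
        "correlation_id=%s\n%s",
        correlation_id,
        "".join(traceback.format_exception(type(exc), exc, exc.__traceback__)),
    )
    return {"error": "Something went wrong.", "correlation_id": correlation_id}

try:
    1 / 0
except ZeroDivisionError as exc:
    response = handle_error(exc)

assert "Traceback" not in response["error"]
assert len(response["correlation_id"]) == 36
```

Support gets what it needs from the internal log; the attacker gets a UUID and a shrug.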

A10:2025 – Mishandling of Exceptional Conditions is also where retry storms live. This happens when a downstream service blips and your code retries immediately, with no backoff, no jitter, and no upper bound. Under load, you can accidentally create a denial-of-service problem inside your own system.
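The standard fix is exponential backoff with jitter and a hard attempt bound. A minimal sketch (the tiny `base` in the demo call just keeps the example fast):

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base=0.1, cap=5.0):
    """Exponential backoff with full jitter and a bounded attempt count.

    The sleep is uniform in [0, min(cap, base * 2**attempt)], so
    concurrent retries spread out instead of synchronizing into a storm.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # bounded: give up instead of retrying forever
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

calls = {"n": 0}
def sometimes_down():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("blip")
    return "ok"

assert retry_with_backoff(sometimes_down, base=0.001) == "ok"
assert calls["n"] == 3
```

The two guardrails matter together: jitter desynchronizes the herd, and the attempt bound guarantees the retry loop itself cannot become an unbounded load generator.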

The practical mindset is not never fail. It is fail in a way that does not create a security incident.

That usually means guardrails for messy reality:

  • Circuit breakers so you stop hammering a failing dependency.
  • Backoff with jitter so retries spread out instead of piling up.
  • Transaction rollbacks so partial failures do not leave broken state behind.
  • Safe defaults for authorization, because we were not sure should not translate into allow.
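The first guardrail can be sketched in a few dozen lines. A deliberately minimal circuit breaker, with illustrative defaults (real implementations add half-open probing limits, metrics, and per-dependency state):

```python
import time

class CircuitBreaker:
    """After max_failures consecutive errors, calls are short-circuited
    for cooldown seconds instead of hammering the failing dependency."""

    def __init__(self, max_failures: int = 3, cooldown: float = 30.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: dependency marked unhealthy")
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the count
        return result

breaker = CircuitBreaker(max_failures=2, cooldown=60)
def flaky():
    raise ConnectionError("down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

try:
    breaker.call(flaky)
    tripped = False
except RuntimeError:
    tripped = True
assert tripped  # third call never reaches the dependency
```

Note the failure mode: the breaker fails closed for the caller (an immediate error it must handle) rather than letting load pile onto a dependency that is already struggling.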

And this is where A09:2025 – Security Logging & Alerting Failures connects directly. If your system fails in weird ways and nobody gets notified, you will not respond in time. Logging that sits quietly in storage is not the same as detection.

This is the fourth shift: failing safely becomes part of application security, not just part of reliability engineering.

Identity becomes the real perimeter (A01:2025 and A07:2025)

A01:2025 – Broken Access Control is still #1. That isn’t surprising: access control failures remain both common and high impact. The consolidation of SSRF into A01:2025 is one of the clearest signals in the update. In cloud environments, outbound requests are not just outbound requests. They can be a way to steal credentials, reach internal services, and move sideways.

A07:2025 – Authentication Failures stays on the list. The important point is that modern systems cannot rely on network location as a strong signal anymore. Requests can originate from anywhere, and systems are distributed by default. Identity is what holds it together.

That includes human identity, and it includes machine identity, service accounts, API keys, bot tokens, and short-lived workload credentials.

Most teams know the basics for securing human logins, at least in theory. Machine identity is where things get messy. Secrets get copied into configs. Tokens do not rotate. A service account gets broad permissions because someone needed to unblock a deploy six months ago, and then nobody remembers to shrink it back down.

The practical work here is not exciting, but it’s effective:

  • Centralize authorization and reuse it. Access control is easiest to get wrong when it is duplicated everywhere.
  • Deny by default, then allow explicitly.
  • Keep sessions and tokens short-lived, invalidate properly, and reduce the attacker’s window.
  • Treat service accounts like production assets, scoped permissions, rotation, audit trails, and removal when no longer needed.
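The short-lived-credential point can be sketched directly. An illustrative token structure (a real system would sign this, e.g. as a JWT, rather than pass plain dicts):

```python
import time

def issue_token(subject: str, ttl_seconds: int = 900) -> dict:
    """Short-lived credential: a 15-minute default shrinks the window
    an attacker gets from a stolen token."""
    return {"sub": subject, "exp": time.time() + ttl_seconds}

def is_valid(token: dict, revoked: frozenset = frozenset()) -> bool:
    if token["sub"] in revoked:
        return False  # invalidation works even before expiry
    return time.time() < token["exp"]

token = issue_token("svc-deploy")
assert is_valid(token)
assert not is_valid(token, revoked=frozenset({"svc-deploy"}))
assert not is_valid({"sub": "old-job", "exp": time.time() - 1})
```

Two properties carry the weight: expiry bounds the damage of a leak without anyone noticing, and the revocation path handles the case where someone does notice.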

This is the fifth shift: identity controls are no longer just an auth-screen concern, they are the core structure that keeps a distributed system from falling apart.

The 2025 DevSecOps toolchain has to match the new reality

If you look at the 2025 list, it is obvious you cannot solve these problems with one scanner and a checklist.

  • A02:2025 – Security Misconfiguration is about misconfiguration, so you need tooling that understands infrastructure, configuration, and drift.
  • A03:2025 – Software Supply Chain Failures is about supply chain failures, so you need visibility into dependencies, build steps, and artifact provenance.
  • A08:2025 – Software or Data Integrity Failures is about integrity boundaries, so you need validation and verification inside the system, not only at the edge.
  • A09:2025 – Security Logging & Alerting Failures is about logging and alerting failures, so you need detection and response, not just logs.
  • A10:2025 – Mishandling of Exceptional Conditions is about exceptional conditions, so you need testing and observability around failure paths, not only feature paths.

A practical toolchain ends up looking like layers that cover different parts of the stack:

  • Static analysis and code rules to catch common injection patterns early.
  • Dependency scanning and SBOM generation to support A03, plus the ability to answer, What are we running, exactly?
  • IaC scanning for Terraform and Kubernetes manifests to prevent A02:2025 – Security Misconfiguration mistakes before they get applied.
  • Artifact signing and verification so supply chain controls are enforced at deployment, not just described in a policy document.
  • Runtime monitoring plus real alerting workflows so you do not live inside A09.
  • Chaos and failure-mode testing so the system behaves safely under A10 scenarios, not only in demos.

Then there is the cultural piece, which matters as much as the tools. If teams treat security as a last-minute gate, they will always lose. The 2025 list is basically saying the attack surface is spread across the entire software lifecycle, and your controls have to be spread out too.

This is the sixth shift: DevSecOps has to cover infrastructure, build pipelines, and failure behavior, not just code quality.

OWASP Top 10: Plan for the crash

The best systems do not avoid failure forever. They control what failure looks like.

OWASP Top 10 2025 makes that point without being dramatic. A01:2025 – Broken Access Control reminds you access control still fails constantly. A02:2025 – Security Misconfiguration shows how much damage a configuration mistake can do. A03:2025 – Software Supply Chain Failures and A08:2025 – Software or Data Integrity Failures make it clear your dependencies and pipelines are part of your security story now, and internal integrity boundaries matter too. A09 tells you that quiet logs are not protection. A10:2025 – Mishandling of Exceptional Conditions puts a name on a truth many teams learn the hard way, when systems get weird, security bugs appear.

So here is a simple question to take into your next sprint planning session:

If your authorization service vanished right now, would your application stay shut, or would it try to be polite and let everyone in?

If it is the second one, the fix is not another checklist item. The fix is a design decision. Design for the crash, because the happy path is not the only path that exists, it is just the one you see most often.

OWASP Top 10 and Threat Modeling

Threat modeling turns the OWASP Top 10 into a design tool, not just a checklist. Instead of waiting for pentests to find A01:2025 – Broken Access Control gaps or A10:2025 – Mishandling of Exceptional Conditions error handling weaknesses, threat modeling surfaces these risks during design, when they’re cheapest to fix. Specifically, threat modeling methods such as STRIDE or PASTA can help.

A structured threat model maps your attack surface (entry points, trust boundaries, data flows) and asks what breaks here? for each component. That’s where OWASP categories become concrete. Treat OWASP as the vocabulary for threats you’re actively hunting in design sessions, and you shift from reactive patching to proactive architecture.

OWASP Top 10 2025 Description

Here’s a description of all OWASP Top 10 entries, directly from the official website:

A01:2025 Broken Access Control

Access control enforces policy such that users cannot act outside of their intended permissions. Failures typically lead to unauthorized information disclosure, modification or destruction of all data, or performing a business function outside the user’s limits.

A02:2025 Security Misconfiguration

Security misconfiguration is when a system, application, or cloud service is set up incorrectly from a security perspective, creating vulnerabilities.

A03:2025 Software Supply Chain Failures

Software supply chain failures are breakdowns or other compromises in the process of building, distributing, or updating software. They are often caused by vulnerabilities or malicious changes in third-party code, tools, or other dependencies that the system relies on.

A04:2025 Cryptographic Failures

Generally speaking, all data in transit should be encrypted at the transport layer (OSI layer 4). Previous hurdles such as CPU performance and private key/certificate management are now handled by CPUs having instructions designed to accelerate encryption (eg: AES support) and private key and certificate management being simplified by services like LetsEncrypt.org with major cloud vendors providing even more tightly integrated certificate management services for their specific platforms. 

Beyond securing the transport layer, it is important to determine what data needs encryption at rest as well as what data needs extra encryption in transit (at the application layer, OSI layer 7). For example, passwords, credit card numbers, health records, personal information, and business secrets require extra protection, especially if that data falls under privacy laws, e.g., EU’s General Data Protection Regulation (GDPR), or regulations such as PCI Data Security Standard (PCI DSS).

A05:2025 Injection

An injection vulnerability is an application flaw that allows untrusted user input to be sent to an interpreter (e.g. a browser, database, the command line) and causes the interpreter to execute parts of that input as commands.

A06:2025 Insecure Design

Insecure design is a broad category representing different weaknesses, expressed as “missing or ineffective control design.” Insecure design is not the source for all other Top Ten risk categories. Note that there is a difference between insecure design and insecure implementation. We differentiate between design flaws and implementation defects for a reason, they have different root causes, take place at different times in the development process, and have different remediations. A secure design can still have implementation defects leading to vulnerabilities that may be exploited. An insecure design cannot be fixed by a perfect implementation as needed security controls were never created to defend against specific attacks. One of the factors that contributes to insecure design is the lack of business risk profiling inherent in the software or system being developed, and thus the failure to determine what level of security design is required.

Three key parts of having a secure design are:

  • Gathering Requirements and Resource Management
  • Creating a Secure Design
  • Having a Secure Development Lifecycle

Note that threat modeling and security by design play a part in preventing insecure design.

A07:2025 Authentication Failures

When an attacker is able to trick a system into recognizing an invalid or incorrect user as legitimate, this vulnerability is present.

A08:2025 Software or Data Integrity Failures

Software and data integrity failures relate to code and infrastructure that does not protect against invalid or untrusted code or data being treated as trusted and valid.

A09:2025 Security Logging & Alerting Failures

Without logging and monitoring, attacks and breaches cannot be detected, and without alerting it is very difficult to respond quickly and effectively during a security incident.

A10:2025 Mishandling of Exceptional Conditions

Mishandling exceptional conditions in software happens when programs fail to prevent, detect, and respond to unusual and unpredictable situations, which leads to crashes, unexpected behavior, and sometimes vulnerabilities. This can involve one or more of the following 3 failings; the application doesn’t prevent an unusual situation from happening, it doesn’t identify the situation as it is happening, and/or it responds poorly or not at all to the situation afterwards.

Connect with me

Try my threat modeling tool, it's completely free to use.