Ernst & Young Exposes 4TB SQL Server Backup on Azure: When Big Four Security Fails

A catastrophic cloud misconfiguration left one of the world's largest accounting firms vulnerable to complete database exfiltration

In a breach that underscores the persistent danger of cloud misconfigurations, Ernst & Young (EY), one of the Big Four accounting giants, accidentally exposed a massive 4-terabyte SQL Server backup file on Microsoft Azure. The unencrypted database, discovered by Dutch cybersecurity firm Neo Security during routine reconnaissance operations, contained the type of sensitive data that keeps CISOs awake at night: API keys, session tokens, user credentials, cached authentication tokens, and service account passwords.

The Discovery: A Digital Vault Left Wide Open

The incident began when a Neo Security researcher, running passive traffic analysis as part of a routine asset-mapping exercise, stumbled upon an unusually large file. A simple HEAD request—designed to retrieve metadata without downloading content—revealed the staggering scope: 4 terabytes of data, equivalent to millions of documents or roughly 500 to 1,000 full HD movies.
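For context, sizing up a file like this takes nothing more than an HTTP HEAD request. The sketch below (Python, using the requests library) reads the Content-Length header; the URL is purely hypothetical, since the actual blob location was never published.

```python
import requests

# Hypothetical URL for illustration only; the real blob address was not published.
url = "https://example.blob.core.windows.net/exports/database.bak"

# HEAD returns the response headers (including size) without transferring the body.
resp = requests.head(url, timeout=10)
size_bytes = int(resp.headers.get("Content-Length", 0))

print(f"HTTP {resp.status_code}, ~{size_bytes / 1_000_000_000_000:.1f} TB")
```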

The file's naming convention immediately raised red flags. The .BAK extension is the standard format for SQL Server backup files, which typically contain complete database dumps including schemas, stored procedures, user data and, critically, embedded secrets. Unlike application logs or other routine artifacts, a SQL Server .BAK file represents the crown jewels of enterprise infrastructure—everything an attacker needs to understand and potentially compromise an organization's digital ecosystem.

To confirm the file's authenticity without crossing legal boundaries, the researcher downloaded only the first 1,000 bytes. The "magic bytes" signature confirmed their worst fears: an unencrypted SQL Server backup, sitting publicly accessible on the internet.
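That "first 1,000 bytes" check can be reproduced with an HTTP Range request. The sketch below assumes an unencrypted, uncompressed SQL Server backup written in Microsoft Tape Format (MTF), whose leading block descriptor starts with the ASCII string "TAPE"; the URL is again hypothetical.

```python
import requests

url = "https://example.blob.core.windows.net/exports/database.bak"  # hypothetical

# Request only the first 1,000 bytes instead of the full 4 TB file.
resp = requests.get(url, headers={"Range": "bytes=0-999"}, timeout=10)

# Assumption: an unencrypted SQL Server .BAK in Microsoft Tape Format begins
# with a block descriptor whose type string is "TAPE".
if resp.content[:4] == b"TAPE":
    print("Unencrypted SQL Server backup signature found.")
else:
    print("No recognizable MTF header; the file may be encrypted or compressed.")
```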

Attribution: Following the DNS Breadcrumbs

Determining ownership required meticulous detective work. Initial searches yielded merger documents in a south-central European language, revealing the company had been acquired in 2020. The breakthrough came from a DNS Start of Authority (SOA) record lookup—essentially asking the internet's phonebook who controls the domain. The answer pointed directly to an authoritative DNS server under ey.com.
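An SOA lookup is a one-liner with standard tooling. A minimal sketch with the dnspython library, using a placeholder domain because the exposed hostname was never published:

```python
import dns.resolver  # pip install dnspython

# Placeholder domain; the actual exposed hostname was not published.
answers = dns.resolver.resolve("acquired-company.example", "SOA")

for rdata in answers:
    # The SOA MNAME field names the primary authoritative nameserver.
    # In this incident, that trail led back to ey.com infrastructure.
    print("Primary nameserver:", rdata.mname)
```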

This was no startup or mid-market firm. This was Ernst & Young, a member of the accounting industry's Big Four, with approximately £40 billion in annual revenue and clients spanning the Fortune 500.

The Anatomy of a Cloud Misconfiguration

Neo Security's analysis paints a disturbing picture of how easily catastrophic security failures occur in modern cloud environments. The incident likely originated from a routine database migration scenario familiar to any infrastructure engineer: an administrator needed to transfer a database between environments. The VPN was unreliable. Firewall rules were complex. Under pressure and working late, they made a fateful decision: temporarily open the Azure storage container (Azure's equivalent of a bucket) to public access, download the file, then immediately restore private access.

Modern cloud platforms make database exports trivially easy—a few clicks, select your database, choose a destination bucket, and the export happens automatically in the background. But this convenience masks a critical vulnerability: one wrong click, one typo in a bucket name, and private data becomes publicly accessible. You intend to export to "company-internal-backups" but accidentally type "company-public-assets." Or you create a new bucket for the export, forget to lock it down, and the script or template that provisioned it leaves public access enabled.
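The safer pattern for that one-off transfer is a short-lived shared access signature (SAS) rather than any change to the container's public access level. A minimal sketch with the azure-storage-blob Python SDK; the account, container, and blob names are hypothetical:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Hypothetical names; substitute your own storage account, container, and blob.
account_name = "companyinternalbackups"
container_name = "exports"
blob_name = "database.bak"

sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key="<storage-account-key>",                      # or a user delegation key
    permission=BlobSasPermissions(read=True),                 # read-only
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),   # link dies in one hour
)

download_url = (
    f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}"
    f"?{sas_token}"
)
print(download_url)  # share this time-limited link; the container stays private
```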

The export process is asynchronous, creating a dangerous window of ambiguity. The administrator believes the permissions are correct, but by the time the file actually lands in the bucket, it may already be sitting behind a misconfigured access policy, with nobody watching. The file sits there—publicly accessible—for an unknown duration. Hours. Days. Potentially weeks.

The Race Against Automated Adversaries

What makes this exposure particularly dangerous isn't the accidental misconfiguration—it's the ecosystem of automated adversaries constantly scanning for exactly these mistakes. Modern botnets can sweep the entire IPv4 address space in minutes, searching specifically for misconfigured cloud buckets, exposed databases, and publicly accessible backup files.

Neo Security referenced a previous fintech breach that started with a database backup accidentally set to public for approximately five minutes. In that narrow window, attackers had already exfiltrated the entire dataset, including personally identifiable information and credentials. The breach led to ransomware deployment and ultimately the company's collapse.

With EY's 4TB backup, the exposure window remains unknown. But in the world of automated scanning, if it was accessible for more than a few minutes, it's safest to assume compromise.

The Responsible Disclosure Nightmare

Discovering the breach was one thing. Reporting it proved nearly as challenging.

Neo Security faced the uncomfortable reality of responsible disclosure over a weekend with no clear security contact. EY itself runs a comprehensive cybersecurity practice, offering clients threat management, detection, and response services, and prominently features October as Cybersecurity Awareness Month on its website. Yet the firm lacked a fundamental component of incident response: a publicly accessible security contact or vulnerability disclosure program.

The researcher resorted to cold outreach via LinkedIn, sending 15 messages before finally connecting with someone who could escalate to EY's Computer Security Incident Response Team (CSIRT). This communication gap is particularly ironic for an organization that advises Fortune 500 companies on cybersecurity strategy.

EY's Response: Swift Remediation, Unclear Accountability

Once notified, EY's response was exemplary. Neo Security praised the firm's handling: "Textbook perfect. Professional acknowledgment. No defensiveness, no legal threats. Just: 'Thank you. We're on it.'" Within one week, the CSIRT had triaged and fully remediated the exposure.

EY's official statement attempted to minimize the incident's scope: "Several months ago, EY became aware of a potential data exposure and immediately remediated the issue. No client information, personal data, or confidential EY data has been impacted. The issue was localized to an entity that was acquired by EY Italy and was unconnected to EY global cloud and technology systems."

This statement raises more questions than it answers. If the incident occurred "several months ago," why was it only recently discovered by external researchers? What monitoring gaps allowed a 4TB backup to remain publicly accessible for an extended period? And how can EY confidently assert no client data was impacted without knowing the full exposure timeline or which external actors may have accessed the file?

The attribution to an acquired Italian entity suggests this may have been a post-merger integration failure—a common but dangerous scenario where acquired infrastructure operates in a security twilight zone before being fully integrated into parent company security frameworks.

The Broader Implications: If EY Can Fail, Anyone Can

Neo Security hammers home the incident's most sobering lesson: "If EY, with all their resources, security teams, compliance frameworks, ISO certifications, and Big Four budget, can have a 4TB SQL Server backup sitting publicly accessible on the internet, then anyone can."

This isn't a story about inadequate security budgets or lack of expertise. Ernst & Young employs thousands of security professionals and maintains extensive compliance certifications. They advise clients on governance, risk, and compliance. Yet they fell victim to the same human error and tooling complexity that affects organizations across the spectrum.

The incident highlights several systemic challenges in cloud security:

1. Convenience vs. Security Trade-offs
Cloud platforms prioritize ease of use and rapid deployment. Export tools assume administrators know what they're doing and rarely warn about potentially dangerous configurations. The path of least resistance often leads to insecure defaults.

2. Asynchronous Operations Create Blind Spots
When export processes happen in the background, administrators lose direct control over the exact moment when data becomes accessible. This temporal gap between action and result creates opportunities for misconfiguration.

3. Attack Surface Complexity
Modern enterprises maintain sprawling cloud estates across multiple providers, regions, and business units. Maintaining comprehensive visibility across this attack surface requires constant, automated monitoring—yet many organizations still rely on periodic manual audits.

4. The Merger Integration Gap
Acquired entities represent high-risk security zones. They often operate with different security standards, lack integration with parent company monitoring systems, and may have legacy infrastructure that doesn't align with corporate policies.

5. Lack of Standardized Disclosure Processes
That Ernst & Young—a firm that should be a model for corporate governance—lacks a clear vulnerability disclosure program represents a broader industry failure. Every organization with an internet presence should have a clearly advertised security contact and escalation process.

What Should Have Been Different

Several preventive measures could have either prevented this exposure or dramatically reduced its impact:

Encryption at Rest: The backup was unencrypted. Even if publicly accessible, encryption would have rendered the data useless to attackers without the corresponding decryption keys.
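As one illustration (not a description of how EY handles backups), the sketch below client-side encrypts a backup file with the Python cryptography library before it ever reaches a storage bucket; file names and key handling are hypothetical, and a real multi-terabyte backup would need streamed, chunked encryption rather than a single in-memory pass.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key lives in a key vault or HSM, never beside the backup.
key = Fernet.generate_key()
fernet = Fernet(key)

# Illustration only: a 4 TB file would be encrypted in streamed chunks,
# not read into memory in one call.
with open("database.bak", "rb") as src:
    ciphertext = fernet.encrypt(src.read())

with open("database.bak.enc", "wb") as dst:
    dst.write(ciphertext)

# Even if database.bak.enc leaks, it is useless without the key.
```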

Automated Compliance Scanning: Cloud Security Posture Management (CSPM) tools continuously scan infrastructure for misconfigurations, including publicly accessible storage buckets. These should be mandatory for any enterprise cloud deployment.
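A bare-bones version of that check can be scripted directly against the Azure management APIs. The sketch below assumes the azure-identity and azure-mgmt-storage Python SDKs and flags storage accounts whose allow_blob_public_access setting is still enabled; a real CSPM tool covers far more than this one property.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Flag any storage account where anonymous blob access has not been disabled.
for account in client.storage_accounts.list():
    if account.allow_blob_public_access:
        print(f"WARNING: public blob access enabled on {account.name}")
```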

Principle of Least Privilege: Database exports should never default to or allow public access. Organizations should implement restrictive IAM policies that make it difficult or impossible to accidentally expose sensitive data.

Continuous Attack Surface Monitoring: Organizations should maintain their own external reconnaissance capabilities—essentially attacking themselves before adversaries do. Discovering your own exposures before threat actors is the only defensible position.
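A crude starting point is simply probing your own endpoints the way a stranger would. The sketch below, with hypothetical URLs, flags anything that answers an unauthenticated HEAD request; a real program would enumerate assets continuously rather than work from a static list.

```python
import requests

# Hypothetical inventory of your own externally reachable storage endpoints.
KNOWN_ENDPOINTS = [
    "https://companyinternalbackups.blob.core.windows.net/exports/database.bak",
    "https://companypublicassets.blob.core.windows.net/exports/database.bak",
]

def find_anonymous_exposures(urls):
    """Return endpoints that respond successfully to an unauthenticated HEAD request."""
    exposed = []
    for url in urls:
        try:
            resp = requests.head(url, timeout=10)
        except requests.RequestException:
            continue  # unreachable is fine; we only care about open doors
        if resp.status_code == 200:
            exposed.append((url, resp.headers.get("Content-Length", "?")))
    return exposed

for url, size in find_anonymous_exposures(KNOWN_ENDPOINTS):
    print(f"EXPOSED without auth: {url} ({size} bytes)")
```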

Clear Disclosure Channels: Every organization should maintain a security.txt file, vulnerability disclosure program, and 24/7/365 security contact. The harder researchers have to work to report vulnerabilities, the more likely those vulnerabilities will be exploited before they're fixed.
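A minimal security.txt, served at /.well-known/security.txt per RFC 9116, can be as short as the sketch below (contact details hypothetical):

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/vulnerability-disclosure
```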

Post-Merger Security Integration: Acquired entities should be subject to accelerated security assessments and rapid integration into parent company security frameworks. The "EY Italy acquired entity" gap represents a known risk category that should trigger heightened monitoring.

The Time Bomb Metaphor

Neo Security titled their writeup "The 4TB Time Bomb," and the metaphor is apt. SQL Server backups don't gradually degrade in sensitivity—they represent a complete snapshot of operational infrastructure at a moment in time. API keys, service account passwords, and other embedded credentials don't become stale quickly. An exposed backup is a persistent attack vector that can be exploited days, weeks, or months after discovery.

Moreover, the incident response timeline matters less than the initial exposure duration. EY may have remediated the misconfiguration within a week of notification, but the backup may have been accessible for far longer. In modern threat landscapes, automated scanning ensures that exposure and exploitation are nearly simultaneous.

Lessons for Security Practitioners

This incident should serve as a forcing function for security teams across industries:

1. Don't Trust Cloud Default Settings: Review and harden default configurations for every cloud service. Assume defaults prioritize convenience over security.

2. Implement Defense in Depth: No single control should stand between sensitive data and public exposure. Combine encryption, access controls, network segmentation, and monitoring.

3. Prioritize External Visibility: You can't defend what you can't see. Maintain continuous external reconnaissance to discover exposures before adversaries do.

4. Make Responsible Disclosure Easy: If researchers struggle to report vulnerabilities, you'll only hear about them after exploitation.

5. Plan for Human Error: The most sophisticated security frameworks fail if they don't account for human mistakes under pressure. Design systems that are difficult to misconfigure.

6. Treat Mergers as Security Events: Acquired entities should trigger incident-level security assessments and accelerated integration into security monitoring systems.

The Unanswered Questions

Despite extensive reporting, several critical questions remain:

  • How long was the backup publicly accessible before discovery?
  • How many unique IP addresses accessed the file?
  • What specific data was contained in the backup?
  • Were any indicators of compromise identified suggesting adversarial access?
  • What changes has EY implemented to prevent recurrence?
  • Why did it take months before external researchers discovered an exposure EY claims to have known about?

The lack of detailed post-incident transparency suggests EY may still be investigating the full scope of the breach—or may prefer to keep those details confidential. Neither scenario is reassuring for clients whose data may have been included in the exposed backup.

Conclusion: The Persistence of Configuration Risk

The Ernst & Young Azure exposure isn't just another data breach—it's a case study in how organizational complexity, human error, and cloud convenience create persistent vulnerabilities even in the most well-resourced enterprises.

As Neo Security's researcher noted, this wasn't a sophisticated nation-state attack or zero-day exploit. It was a simple misconfiguration, the kind that happens thousands of times daily across cloud environments worldwide. The difference is that most of these errors affect smaller organizations with smaller datasets and lower public profiles. When a Big Four accounting firm exposes 4TB of data, it becomes a watershed moment that forces the industry to confront uncomfortable truths about cloud security maturity.

The lesson isn't that cloud platforms are inherently insecure—it's that their power and flexibility create new categories of risk that require new defensive strategies. Traditional perimeter security, compliance checklists, and periodic audits aren't sufficient. Organizations need continuous, automated visibility into their attack surface, combined with defense-in-depth architectures that assume breaches will occur and minimize their impact.

Ernst & Young will likely recover from this incident. Their brand is strong, their client relationships are deep, and their response was professional. But the exposure serves as a reminder that in cybersecurity, reputation and resources provide no immunity from basic operational failures. The next 4TB time bomb could belong to any organization—including yours.


Timeline of Events:

  • Unknown date: 4TB SQL Server backup becomes publicly accessible on Azure
  • "Several months ago" (per EY): EY claims awareness of potential exposure
  • Recent (exact date unknown): Neo Security discovers exposure during routine reconnaissance
  • Weekend following discovery: Researcher sends 15 LinkedIn messages before reaching someone able to escalate to EY's CSIRT
  • Within one week: EY CSIRT triages and remediates the exposure
  • October 29, 2025: Public disclosure of incident

Key Takeaway: If Ernst & Young—with its vast resources, security expertise, and compliance certifications—can accidentally expose 4TB of unencrypted database backups on public cloud infrastructure, every organization faces similar risk. The question isn't whether your organization has misconfigurations, but whether you'll discover them before adversaries do.
