Security researchers Ian Carroll and Sam Curry were curious about McDonald’s AI hiring chatbot. They started by applying for a job. Thirty minutes later they had full administrative access to virtually every job application McDonald’s had ever received.
They didn’t use a zero-day exploit. They didn’t run a sophisticated social engineering campaign. They typed “123456” into a login field.
That single default password, combined with a basic API vulnerability, exposed the personal data of 64 million job applicants across McDonald’s 40,000+ global restaurants. No sophisticated attack. No nation-state adversary. Just a forgotten test account and the world’s most common password.
What Olivia is, and what she was hiding
Olivia is an AI-powered hiring chatbot developed by Paradox.ai and branded as “McHire” for McDonald’s use. She screens applicants, schedules interviews, collects resumes, and administers personality tests. Some 90% of McDonald’s franchisees use her. For millions of hourly job seekers, Olivia is the first point of contact with one of the world’s largest employers.
Behind Olivia’s conversational interface was a backend administration portal that franchise owners use to review applicants. Carroll and Curry found it while poking around after noticing that Olivia’s responses often looped nonsensically. They looked at the backend. They found a login page. They tried “admin.” It didn’t work. They tried “123456.”
It worked immediately.
What they found inside
The researchers spent thirty minutes inside in total, enough to confirm they could reach the personal data of all 64 million applicants. They reported both issues to Paradox.ai, which fixed them within hours. Paradox stated that no candidate information was leaked online or made publicly available by malicious actors. What remains unknown is whether anyone else found the same door before Carroll and Curry did and chose not to announce it.
The two failures that caused it
Paradox.ai acknowledged this was “an old, unused test account that should have been decommissioned.” It wasn’t. It sat in production, connected to a live database containing 64 million records, protected by the world’s most common password and no multi-factor authentication.
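That gap is easy to close mechanically. Below is a minimal sketch, in Python, of a deny-list check at credential creation; the password list, length threshold, and function name are illustrative, not taken from McHire or Paradox.ai:

```python
# Illustrative deny-list check at password creation time. The list below is a
# small sample of entries from publicly known most-common-password lists.
COMMON_PASSWORDS = {
    "123456", "password", "12345678", "qwerty", "123456789",
    "admin", "letmein", "welcome", "111111", "abc123",
}

def password_is_acceptable(password: str, min_length: int = 12) -> bool:
    """Reject passwords that are too short or on the common-password list."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(password_is_acceptable("123456"))                       # False
print(password_is_acceptable("correct horse battery staple"))  # True
```

Even a ten-line check like this would have refused to let “123456” exist, and most identity providers offer the equivalent as a configuration setting.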
Insecure Direct Object References (IDOR) are among the best-documented API vulnerabilities in existence: change an identifier in a request, such as an applicant ID, and the server hands back someone else’s record without ever checking whether you are authorized to see it. IDOR falls under Broken Access Control, the top category in the OWASP Top 10. Any penetration test worth its scope would have caught this in under an hour. The McHire API had never been tested.
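The vulnerability class is easy to show in miniature. The sketch below is hypothetical Python (the records, field names, and functions are invented, not from the McHire API): the first handler trusts a client-supplied ID outright, so incrementing that ID enumerates every applicant, while the second adds the ownership check whose absence defines an IDOR.

```python
# Hypothetical applicant store keyed by a sequential, guessable ID.
APPLICATIONS = {
    1: {"owner": "franchise_a", "name": "Alice", "phone": "555-0100"},
    2: {"owner": "franchise_b", "name": "Bob",   "phone": "555-0101"},
}

def get_application_vulnerable(app_id: int) -> dict:
    # IDOR: no authorization check, so any caller can walk
    # app_id = 1, 2, 3, ... and read every record.
    return APPLICATIONS[app_id]

def get_application_fixed(app_id: int, requester: str) -> dict:
    record = APPLICATIONS[app_id]
    # Authorization check: the requester must own the record it asks for.
    if record["owner"] != requester:
        raise PermissionError("not authorized for this application")
    return record
```

The fix is one comparison. What a penetration test looks for is precisely whether that comparison exists on every endpoint that accepts an identifier.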
Why this matters far beyond McDonald’s
McDonald’s didn’t build McHire. They licensed it from Paradox.ai and deployed it across 40,000 restaurants. The security failure wasn’t inside McDonald’s own systems; it was inside their vendor’s platform, a vendor that had never run a penetration test on the API sitting between its admin portal and tens of millions of records.
Enterprise AI adoption grew 187% between 2023 and 2025. Security spending on those AI systems increased 43%. That gap is where breaches like this live.
Every startup deploying an AI tool for hiring, customer support, sales, or any function that touches user data is in the same position McDonald’s was in. The tool is useful, you connected it, your users trust you with their data, and the security review of that tool consisted of reading the vendor’s marketing page.
Kobi Nissan, CEO of MineOS, put it precisely: “Any AI system that collects or processes personal data must be subject to the same privacy, security, and access controls as core business systems. That means authentication, auditability, and integration into broader risk workflows, not siloed deployments that fly under the radar.”
The checklist that would have stopped this
✓ Decommission all test accounts before production launch. If an account has access to production data, it must have production-grade credentials or it must not exist.

✓ Enforce MFA on every admin interface, with no exceptions for test or legacy accounts. One additional step would have made the password irrelevant.

✓ Run a penetration test scoped to every API endpoint before deployment. IDOR is a textbook OWASP Top 10 finding; any competent VAPT would have caught and flagged it.

✓ Complete a vendor security review before connecting any AI tool to user data. A current SOC 2 Type II report, a penetration test summary, and a data processing agreement are the minimum before giving a vendor access to your users’ personal information.
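The first two items lend themselves to automation. Here is a hedged sketch of a pre-launch account audit; the account records and field names are hypothetical, and a real audit would pull live data from your identity provider rather than a hardcoded list:

```python
# Hypothetical export of admin-portal accounts. In practice this would come
# from your identity provider's API, not a literal in a script.
ACCOUNTS = [
    {"user": "admin",       "mfa": True,  "is_test": False, "last_login_days": 2},
    {"user": "test-legacy", "mfa": False, "is_test": True,  "last_login_days": 900},
    {"user": "qa-bot",      "mfa": False, "is_test": True,  "last_login_days": 45},
]

def audit_accounts(accounts, stale_days=90):
    """Flag account conditions that should block a production launch."""
    findings = []
    for acct in accounts:
        if acct["is_test"]:
            findings.append((acct["user"], "test account present in production"))
        if not acct["mfa"]:
            findings.append((acct["user"], "MFA not enforced"))
        if acct["last_login_days"] > stale_days:
            findings.append((acct["user"], "stale account, candidate for decommission"))
    return findings

for user, issue in audit_accounts(ACCOUNTS):
    print(f"{user}: {issue}")
```

Run on every release, a check like this turns “should have been decommissioned” from a post-breach admission into a failed build.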
The actual lesson
This breach was not sophisticated. It was not stealthy. It did not require an advanced persistent threat. It required a researcher with thirty minutes, a browser, and the willingness to try a password that should not have existed.
The reason it went undetected for years — Paradox.ai said the test account “should have been decommissioned” — is that nobody ever went looking. No penetration test was run on the McHire API. No audit of live credentials was performed. No check of whether test accounts had been removed before the platform connected to tens of millions of real users’ records.
The same invisible exposure exists in every startup that has deployed an AI tool for a business function, trusted the vendor’s badge, and moved on. The question is whether a researcher finds it first or someone with different intentions does.
Every AI tool your team connects to user data is a vendor security review waiting to happen.
Osto runs penetration tests scoped to your API layer, reviews your vendor connections, identifies exposed admin interfaces and legacy accounts, and deploys the continuous monitoring that means you find these issues before a researcher or attacker does.
The McHire breach took thirty minutes to find and would have taken less than a day to prevent. That math should not be this lopsided.