Can anyone catch up with Mastercard’s $7B AI? It will take some PETs.

Brynn Moynihan
February 7, 2024

Mastercard just showed how a massive war chest can translate into huge first-mover advantage in AI. Here’s how companies can level the playing field – by using the right data security.


Mastercard, the second-largest payment-technology corporation worldwide, just announced the launch of its own proprietary artificial intelligence (AI) model to help thousands of banks in its network detect and root out fraudulent transactions. 

It’s great to see that prominent payment-technology companies are making serious investments in solving the fraudulent transaction problem. In 2022, annual global fraud losses (credit & debit card) reached $34 billion. AI is a clear solution.  

  • This AI model enables banks to identify fraudulent charges more accurately – a 20% improvement on average, and as much as a 300% improvement for some banks – and more efficiently, cutting the cost of fraud detection by 20%. 
  • This improvement is also great news for banks and Mastercard cardholders, including the nearly 293 million credit card holders and 311 million debit card holders in the US. Better fraud detection gives cardholders stronger protection against fraudulent charges and faster alerts when card information is stolen. 

But will other financial services be able to catch up to Mastercard’s AI savvy? Only if they can get a handle on the security risks inherent to AI. 

The high costs of securing generative AI 

Generative AI opens enormous opportunities for businesses – but it also poses risks of exposing data and IP as information passes between the end user, the business, and the algorithm, which is often owned by a separate organization. Businesses face a real generative AI dilemma: embrace AI and take on real security risks, or stay on the sidelines and lose out on innovation and an AI competitive edge. Given the risks, many leading companies have decided to limit their generative AI use, among them Samsung, Apple, Bank of America, and JPMorgan. 

By building its own AI solutions in-house, Mastercard has sealed its AI off from that exposure. It’s also given itself an enormous AI competitive edge. Very few companies have the resources to invest in a proprietary AI model that manages roughly 125 trillion transactions annually, plus associated data. To pull it all off, Mastercard has invested $7 billion in cybersecurity and AI technologies over the last 5 years – money that few other organizations can spare. 

Making AI security cost-effective with PETs

So can rivals (or others) with smaller AI war chests ever keep up? The answer is yes – with the help of a class of security systems called privacy enhancing technologies, or PETs. PETs encrypt and protect IP and data before AI models are deployed, allowing all the information that needs to be protected to stay fully in-house while only virtually impenetrable encrypted information passes along. Companies can build their own AI models off of pre-existing solutions – a far cheaper option – while keeping sensitive information as guarded as fully in-house AI. 
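As a rough illustration of the general idea – not Pyte’s actual technology or any specific PET product – here is a minimal sketch of one simple privacy-enhancing step: pseudonymizing the identifying fields of a transaction record with a keyed hash before the record ever leaves the organization. All names, fields, and the key below are hypothetical.

```python
import hmac
import hashlib

# Hypothetical secret key: in practice this never leaves the organization.
SECRET_KEY = b"example-in-house-key"

def tokenize(value: str) -> str:
    """Replace a sensitive value with a keyed, irreversible token.

    The same input always maps to the same token, so an external model
    can still correlate records without seeing the raw value.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def protect_transaction(txn: dict) -> dict:
    """Tokenize identifying fields so the record can be shared externally."""
    protected = dict(txn)
    for field in ("card_number", "cardholder"):
        protected[field] = tokenize(protected[field])
    return protected

txn = {"card_number": "5500000000000004", "cardholder": "J. Doe", "amount": 42.50}
safe = protect_transaction(txn)
```

Real PET deployments go much further (secure multiparty computation, homomorphic encryption, and the like), but the design principle is the same: the sensitive plaintext and the key stay in-house, and only protected data crosses the boundary.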

To learn more about how companies like Pyte are using PETs to make AI secure, read our analysis here.
