Opinion

“The responsibility for ethical AI does not rest with IT departments alone; it’s a leadership issue.” – Mounir Hijazi, TP

In an exclusive op-ed for tahawultech.com, Mounir Hijazi, Chief Executive Officer, GCC Region at TP (Teleperformance), highlights that a growing share of consumers believe AI poses a serious threat to their privacy, and argues that ethical AI is ultimately a leadership issue.

One of the growing paradoxes of innovation is that the smarter technology becomes, the more it invites scrutiny.

Artificial intelligence has reshaped the financial services industry by enhancing customer experiences, optimising decision-making, and transforming internal operations.

But with this progress comes a sharper focus on the privacy risks embedded in these systems. The GCC’s financial ecosystem, undergoing rapid digital acceleration, is increasingly aware that the future of innovation depends not only on what AI can do, but also on how responsibly it is applied.

AI now powers many critical parts of the banking and insurance value chain, from onboarding journeys to fraud detection, from credit scoring to personalised offers.

Regional institutions are actively embracing technology to gain efficiency and agility. At the same time, they face an essential challenge that goes beyond infrastructure and data readiness. They must earn and maintain the trust of customers who are becoming more aware and more selective about how their data is handled.

Global sentiment reflects this shift: 57% of consumers believe AI poses a serious threat to their privacy, while 61% are wary of trusting AI systems. These concerns have real consequences for how customers choose financial partners, how regulators evaluate risk, and how institutions build resilience in a digital economy.

For leaders in financial services, this is not a conversation about barriers to progress. It is a call to design smarter systems that are ethical by nature. The question is not whether to use AI, but how to use it with integrity.

This requires a shift in mindset, one that places privacy at the centre of innovation. In doing so, financial institutions in the GCC can set a benchmark for how to lead in the next era of digital trust.

Privacy-by-design as a strategic foundation

In highly regulated industries such as BFSI, the historical view of data privacy has been closely tied to compliance. While meeting regulatory standards remains essential, the nature of AI demands more than baseline governance.

What is needed now is a privacy-by-design framework that anticipates risks before they arise and builds protection into every layer of an AI system.

This includes examining how data is sourced, how consent is obtained, how algorithms make decisions, and how those decisions are explained to both customers and regulators.

Successful digital transformation is rooted in continuous improvement and cultural readiness. These qualities matter just as much in the domain of privacy and data ethics as they do in customer experience and operational agility.

In the GCC, where national strategies increasingly prioritise data sovereignty and AI governance, institutions have an opportunity to lead by example. Across the region, data privacy frameworks continue to evolve at varying levels of maturity.

While global benchmarks such as GDPR provide reference points, GCC markets are shaping approaches that balance innovation ambitions with national priorities around sovereignty and consumer protection.

Building trust as a growth driver

Trust is no longer an abstract concept. It is a business asset. Financial institutions that are transparent about how they collect, store, and use personal data are far more likely to earn the confidence of digital-native customers.

In areas such as Open Banking, the potential for AI-driven use cases is significant, yet regulatory clarity is still maturing in parts of the region. This requires institutions to innovate responsibly while remaining agile as frameworks continue to develop.

Our findings confirm that customers expect personalisation and speed, but not at the cost of control. They want AI-enabled services that are intuitive and relevant, but also understandable and respectful.

This means designing customer journeys where individuals know what is being done with their data and feel empowered to opt in or out of specific features without consequence.

Features like AI chatbots, real-time biometric authentication, and predictive financial insights all hold value. But their success depends on how well they are governed, how clearly they are communicated, and how easily customers can interact with them. Institutions that create this level of clarity and control will find themselves not only more trusted, but also more competitive.

Responsible leadership and the role of culture

The responsibility for ethical AI does not rest with IT departments alone. It is a leadership issue. Senior decision-makers must take ownership of data governance, invest in secure systems, and promote a culture where privacy and transparency are considered foundational to innovation.

This includes training cross-functional teams on privacy risks, setting internal standards for algorithmic fairness, and ensuring that third-party vendors uphold equivalent values.

As institutions in the GCC scale their use of AI, leadership must ensure that growth is not achieved at the cost of customer dignity. Responsible use of AI enables smarter decisions and deeper insights, but it must always be guided by human judgment and ethical clarity.

When institutions pair technical sophistication with moral intention, they build systems that are resilient, respected, and ready for the future.

The way forward for financial services in the GCC

As the digital future takes shape, the financial institutions that lead will be those that understand privacy not just as a risk to manage, but as a value to champion. By aligning innovation with ethics, and speed with responsibility, they will define a new standard for customer experience in a data-driven world.

Regulatory sandboxes in key GCC markets provide a constructive path forward, allowing innovators to test new AI applications within supervised environments while frameworks mature.

At the same time, significant investments in smart infrastructure, including hyperscale data centres and advanced computing capabilities, are accelerating the region’s AI ambitions. This places constructive pressure on regulators to craft AI-friendly policies that encourage innovation while reinforcing data sovereignty and customer privacy.

