AI at Scale Starts with Leadership: Asif Menghrani on trust, execution and responsible growth

  • March 17
  • 4 min read
In conversation with Asif Menghrani, Advisory Board member at Jester Advisory

For CEOs, the AI agenda has moved beyond experimentation. The challenge now is to turn digital ambition into business value while protecting trust, strengthening execution and keeping accountability clear. In this interview, Asif Menghrani shares his perspective on how leaders can use values as a real decision framework, why psychological safety matters in high-pressure environments, and what executive teams must do differently to scale AI responsibly.


Values only matter when pressure tests them


In fast-moving organisations: what does values-based leadership look like at CEO and executive level, and how can leaders make it visible in everyday decisions while still protecting psychological safety?


Asif Menghrani: Values-based leadership becomes real when leaders use values as a decision framework, particularly under pressure. At CEO and executive level, that means translating purpose and principles into consistent leadership behaviour and applying the same standards in difficult moments as in calm ones. Leaders make this visible when they explain decisions transparently, invite challenge and show that values are not symbolic, but actively guide action. This becomes far more credible when organisations define clear leadership principles and behavioural expectations, so strategy, culture and daily decision making remain aligned. But this cannot be left to the CEO alone. Boards have a direct responsibility to reinforce tone at the top by ensuring that leadership evaluation, incentives and governance discussions reflect these principles, not just financial outcomes.

Psychological safety is an operating condition for performance


High-performing teams need speed, trust and open dialogue. How can senior leaders strengthen psychological safety so people speak up early, challenge assumptions and raise risks without fear?


Asif Menghrani: Psychological safety is not about comfort; it is about trust, clarity and respect. Leaders strengthen it by separating people from ideas, so teams can challenge assumptions, raise risks early and offer alternative perspectives without fear of personal consequence. Three behaviours make a tangible difference: rewarding early signals, actively inviting dissent and visibly closing the feedback loop when concerns are raised. In many organisations, psychological safety varies across teams, so leaders need to understand where trust and dialogue break down and address those gaps systematically. Boards matter here as well. When they encourage transparency around risks and challenges, and signal that early escalation is a sign of maturity rather than weakness, they help create the conditions for the same behaviour throughout the organisation.

In AI, the real risk is silence


When AI or digital initiatives go wrong: how should executive teams handle mistakes to create a strong learning culture where learning is faster than blame, but accountability stays clear?


Asif Menghrani: In digital and AI initiatives, mistakes are an inevitable part of the journey. The key is to distinguish between learning failures and negligent failures. Learning failures happen when teams experiment responsibly and encounter unexpected outcomes, and these should be analysed openly and turned into shared learning across the organisation. Negligent failures are different because they reflect ignored standards or weak governance, and in those cases accountability must remain clear. Organisations that transform successfully create structured ways for leadership teams to reflect on setbacks, capture insights and strengthen decision-making for the future.

Execution slows when friction replaces ownership


If the digital or AI strategy is clear but execution slows down: which cultural patterns most often block high-performing teams, especially around trust, decision making and collaboration?


Asif Menghrani: Execution rarely slows because of strategy. It slows because of organisational friction. Three patterns appear repeatedly: unclear decision rights, low trust across functions and a culture where visible failure carries reputational risk. When these dynamics are present, teams escalate decisions unnecessarily, collaboration weakens and innovation slows. The most effective first step is often clarifying decision ownership and accountability, because when roles and responsibilities are transparent, teams regain momentum quickly.

Scaling AI responsibly takes more than technology


What should CEOs and executive teams do differently to scale data and AI responsibly, while building psychological safety, strong performance and sustainable team effectiveness?


Asif Menghrani: Scaling AI responsibly requires organisations to align market offerings, data, technology, governance and culture. Technology alone is insufficient, because without clear standards, strong accountability and close collaboration, AI creates exposure instead of value. CEOs therefore need to frame AI not only as a technical transformation, but as a leadership and organisational one. That means building environments where teams can experiment responsibly, challenge assumptions early and raise concerns before risks become costly. The organisations that scale AI effectively are usually the ones that combine strong governance with the trust and openness required for continuous learning.

Conclusion


Asif Menghrani’s perspective is clear: successful AI transformation depends less on tools than on leadership quality. For CEOs, the real task is to create an organisation where values, culture and governance reinforce each other rather than compete. But cultural reform cannot be delegated downward and left with the executive team alone. If the board does not actively shape tone at the top, leadership standards and long-term accountability, both strategy and culture lose traction where they need the strongest sponsorship. The role of the board is therefore not secondary, but fundamental. When boards and executive teams jointly create clarity, trust and disciplined accountability, AI becomes easier to scale and far more likely to deliver sustainable value.

About Asif Menghrani


Asif Menghrani is a seasoned Non-Executive Director and digital business transformation advisor with more than three decades of international experience across Europe, Asia and the United States. He supports boards and executive teams in turning transformation agendas into measurable business value, with a particular focus on digital execution, leadership effectiveness and cross-functional collaboration. Over the course of his career, he has contributed to major transformation initiatives for organisations including Novartis, PwC and UBS. Today, he advises leaders on how to align strategy, governance and culture in fast-moving environments and serves as an Advisory Board Member at Jester Advisory.