Managers who sign off on AI-assisted hiring decisions, AI-generated forecasts, or AI-curated customer data without the ability to evaluate those outputs are accumulating liability they cannot measure.
This is the argument that Phaneesh Murthy, founder of Primentor and a technology executive with more than three decades of large-scale enterprise experience, has been advancing with growing specificity. AI fluency, in his framing, describes the minimum literacy that responsible management now requires in organizations where AI shapes hiring, performance measurement, forecasting, and strategic planning. Technical expertise and AI fluency are distinct skills, and the gap between them is where most organizations run into trouble.
“You do not need to build the machine,” Murthy has said. “But if you lead people who use it, you must understand what it can and cannot do.”
The Three Dimensions of Phaneesh Murthy’s AI Fluency Framework
Murthy breaks AI fluency into three operational components, each of which maps directly to a management function rather than a technical one.
The first is capability awareness. AI does specific things well: pattern recognition at scale, automated processing of high-volume repetitive tasks, anomaly detection across large datasets, probabilistic forecasting from structured inputs. Managers who grasp this can identify genuine opportunities to extend organizational capacity. Those who don’t tend to either miss high-value applications or set expectations for AI that no system could meet.
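To make the first category concrete, here is a minimal sketch of one capability on that list, anomaly detection, in plain Python. The data, the two-standard-deviation threshold, and the function name are invented for illustration; real systems use far more sophisticated methods, but the management-relevant point is the same: the machine is good at finding the outlier, not at deciding what to do about it.

```python
# Minimal sketch: flagging outliers by z-score. Data and threshold
# are hypothetical; production anomaly detection is far more involved.
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return indices of values more than `threshold` std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(values) if abs(x - mu) / sigma > threshold]

daily_spend = [102, 98, 105, 97, 101, 99, 3400, 103]  # hypothetical daily totals
print(flag_anomalies(daily_spend))  # -> [6], the 3400 outlier
```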
The second is limitation awareness. Generative models produce false information with apparent confidence. Training data carries historical bias into future decisions. Output quality is directly tied to input quality, and in most organizations, inputs receive far less scrutiny than outputs. A manager who accepts AI outputs uncritically under any of these conditions is taking on risk without any ability to measure it.
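The training-data point lends itself to a short illustration. In the hypothetical sketch below, a naive scoring rule learns hire rates from past decisions and, by construction, replays whatever disparity those decisions contained. The records, groups, and field names are all invented; the mechanism is what matters.

```python
# Minimal sketch: historical bias flowing into predictions.
# Records, groups, and field names are hypothetical.
past_decisions = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": True},  {"group": "A", "hired": False},
    {"group": "B", "hired": True},  {"group": "B", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
]

def learned_rate(records, group):
    """The hire rate a naive model would learn for a group from past outcomes."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

# Scoring new candidates by these learned rates replays the old disparity:
# group A scores 0.75 and group B scores 0.25, regardless of merit.
for g in ("A", "B"):
    print(g, learned_rate(past_decisions, g))
```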
The third is strategic implication. AI increases the volume of data and options available to decision-makers. It doesn’t reduce the judgment required to choose among them. When forecasting tools surface multiple scenarios, someone still has to decide which direction fits the organization’s actual goals and constraints. That function belongs to management.
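A rough sketch of what “surfacing multiple scenarios” can look like in practice appears below. Every parameter in it, from the baseline revenue to the growth distribution, is an assumption invented for the example. The tool’s output ends at the percentiles; choosing which scenario to plan against does not appear anywhere in the code, because it can’t.

```python
# Minimal sketch: a Monte Carlo forecast surfacing scenarios.
# Baseline, growth parameters, and quantiles are all hypothetical.
import random

random.seed(7)
BASELINE = 10.0  # current quarterly revenue in $M (assumed)

def simulate(n=10_000):
    """Compound four quarters of growth drawn from an assumed distribution."""
    outcomes = []
    for _ in range(n):
        rev = BASELINE
        for _ in range(4):
            rev *= 1 + random.gauss(0.03, 0.05)  # mean 3%, sd 5% per quarter
        outcomes.append(rev)
    return sorted(outcomes)

runs = simulate()
for label, q in [("pessimistic", 0.10), ("central", 0.50), ("optimistic", 0.90)]:
    print(f"{label}: ${runs[int(q * len(runs))]:.1f}M")
```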
Murthy has been direct about where this leads: “Blind faith in AI is as dangerous as blind resistance to it.”
How Phaneesh Murthy Frames AI Fluency and the Manager’s Role in Work Design
One of AI’s most immediate practical effects is on task allocation. Activities that once required substantial human effort can now be AI-supported: scheduling, reporting, content drafting, preliminary data analysis, customer segmentation. For managers, this creates a real question about what human effort should be directed toward once those tasks shift.
Murthy’s answer is specific: “The manager’s role is not to compete with AI. It is to elevate people beyond what AI can do.” That means redesigning roles with intention rather than watching AI absorb tasks and leaving workforces to figure out what remains. Managers who do this well move human effort toward synthesis, ethical evaluation, and the relationship-dependent work that AI can’t perform. Those who don’t end up with teams reduced to passive operators of automated systems.
His advisory engagements, which have included work with technology firms such as InfoBeans, consistently reflect this organizational philosophy. The recurring emphasis is on intentional role design in environments where AI is actively changing the composition of most jobs.
AI Ethics and the Governance Questions Phaneesh Murthy Assigns to the Management Layer
Ethical oversight of AI has a structural tendency to travel downward in organizations. Technical teams review model design and data handling. Compliance functions address regulatory exposure. Management deploys the outputs. The accountability for how those outputs are used in decisions affecting real people tends to sit nowhere in particular.
Murthy has described this arrangement’s consequences with precision: “Technology scales intent. If your intent lacks responsibility, the scale will magnify that flaw.”
The governance questions that prevent AI-related harm are the ones management needs to own. What population was this model trained on? What biases might that data carry? How are the decisions this model influences being explained to those they affect? Who is accountable when the model is wrong? These questions require organizational judgment. They belong at the management layer because that’s where the relevant decisions about deployment and use are actually being made.
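One way to make that ownership operational, sketched below under invented names and structure, is to encode the questions as a sign-off record that must be complete before deployment proceeds. This is not a standard and not Murthy’s prescribed mechanism; it simply shows the questions living at the deployment gate instead of nowhere in particular.

```python
# Minimal sketch: governance questions as a deployment gate.
# The class, field names, and gating rule are illustrative assumptions.
from dataclasses import dataclass, fields

@dataclass
class DeploymentSignOff:
    training_population: str      # What population was this model trained on?
    known_bias_risks: str         # What biases might that data carry?
    explanation_to_affected: str  # How are decisions explained to those affected?
    accountable_owner: str        # Who is accountable when the model is wrong?

def ready_to_deploy(record: DeploymentSignOff) -> bool:
    """Block deployment until every governance question has an answer."""
    return all(getattr(record, f.name).strip() for f in fields(record))

draft = DeploymentSignOff("2018-2023 applicant pool", "", "", "")
print(ready_to_deploy(draft))  # False: three questions still unanswered
```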
Building Phaneesh Murthy-Style AI Literacy Across Management Tiers
Murthy’s position is that AI fluency comes from engagement rather than from certification programs. Managers who use AI tools directly develop calibrated skepticism about outputs. Those who have real conversations with technical colleagues about model assumptions gain the vocabulary to ask better governance questions. Those who engage with research on AI ethics and governance build the context to evaluate emerging risks before they materialize.
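What calibrated skepticism might look like as a routine rather than a disposition: the hypothetical sketch below samples a slice of AI outputs for human review and tracks the observed error rate. The data and the stand-in review step are invented; the habit of sampling and measuring, rather than trusting wholesale, is the point.

```python
# Minimal sketch: spot-checking AI outputs to calibrate trust.
# Outputs and the review step are stand-ins for real data and real review.
import random

random.seed(3)
outputs = [f"ai-draft-{i}" for i in range(500)]  # hypothetical AI outputs
sample = random.sample(outputs, 25)              # a manageable slice to review

# Stand-in for human review; here a coin flip marks roughly 12% as wrong.
verdicts = [random.random() > 0.12 for _ in sample]

error_rate = 1 - sum(verdicts) / len(verdicts)
print(f"Observed error rate in sample: {error_rate:.0%}")
```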
That insistence on learning through practice rather than credentials has characterized his own three-decade career, from growing Infosys’s revenues from roughly $2M to $700M in global delivery to his current advisory engagements. Credibility, in his view, depends on what leaders actually grasp about the systems driving their organizations forward.
“Leadership today requires technological awareness,” Murthy has noted.
Organizations treating that statement as operational rather than aspirational are building AI fluency at the management layer as a deliberate capability. Those treating it as a slogan tend to discover the cost of the gap at an inconvenient moment.