Editor’s Note: This is the second article in a seven-part GrowthBits series on AI in HR, exploring how leaders can preserve human judgment and embed AI responsibly as work evolves.

Name the four capacities and you can build for them. Leave them vague and you build a rubber-stamp review process and call it human oversight.

What is Human Intelligence?

The current discourse about AI in HR invokes “the human element” constantly. Human touch. Empathy. Judgment. But what do we actually mean? To protect the “human-in-the-loop” in the age of AI, we must know what we are protecting, specifically.

Four capacities constitute human judgment at its highest level. Each has research behind it, and each is trainable. Recognition of their importance is growing in the age of AI, and the companies that embed them in leadership development and AI deployment strategies will have an edge.

Four Capacities of Human Judgment

Analogical Thinking

Importing structure from one domain to solve a problem in another. This is the mechanism behind many historical intellectual breakthroughs, and the move AI cannot make across distant domains.

Darwin drew on Malthus. Germ theory drew on fermentation. Early computer architecture drew on mathematical proof. AI is superb at pattern recognition within a domain, but it cannot spot deep structural equivalence between contexts that share no surface resemblance.

Theory of Mind

Modeling other people’s internal states accurately enough to predict how they will experience a situation. This is the foundation of every high-stakes HR interaction.

Its absence looks like this: the manager baffled by why clear feedback landed badly; the HR leader who cannot understand why a policy that seems obviously fair is experienced as deeply unfair by a specific group; the recruiter who keeps hiring people who look great on paper and then fail on the team. Research by Kidd and Castano (2013)¹ demonstrated that reading literary fiction—specifically fiction with psychologically complex characters—measurably improves theory of mind. The implication is underappreciated: theory of mind is trainable.

Ambiguity Tolerance

Continuing to think carefully when there is no clean answer, and resisting the false certainty that confident-looking AI outputs can produce.

This is the capacity AI is systematically eroding. ML systems are, at their core, ambiguity-reduction machines: they take uncertain inputs and produce confident outputs. AI-generated performance summaries, candidate assessments, and attrition predictions present themselves with an authority their actual epistemic status does not warrant. The practitioner who asks, “What is this missing? What assumption is baked in? What is the actual confidence level?”—that person is exercising ambiguity tolerance.

Ethical Reasoning

Navigating situations where legitimate values conflict and no option is unambiguously right.

Ethical reasoning is thinking carefully about what to do when values conflict—fairness versus efficiency, transparency versus privacy, individual versus organizational interest. AI can perform ethical analysis, but it cannot be accountable for it. AI cannot currently hold the weight of a decision, bear its consequences, or experience the cost of getting it wrong. Ethical reasoning in HR is not primarily a cognitive task; it is an act of bearing responsibility, and that requires a person.

Once these four capacities are clearly named, the next question becomes operational: where should AI handle the work, where should it support human judgment, and where should it stay out of the driver’s seat altogether?


Up Next: Automate, Augment, Anchor: A Framework for Where AI Belongs in HR


References
  1. Kidd, David Comer, and Emanuele Castano. “Reading Literary Fiction Improves Theory of Mind.” Science 342, no. 6156 (2013): 377–380. https://www.science.org/doi/10.1126/science.1239918