Editor’s Note: This is the fifth article in a seven-part GrowthBits series on AI in HR, exploring how leaders can preserve human judgment and embed AI responsibly as work evolves.

Human intuition in hiring and development is simultaneously our greatest asset and our most documented liability.

An Emerging Opportunity: Using AI to Develop Human Judgment

In Trouble with the Curve,1 Clint Eastwood plays a baseball scout going blind who still catches what the data analysts miss: a prospect with a beautiful swing who can’t hit a curveball. The film is easy to read as anti-analytics nostalgia, but read it again: the real lesson is about what each tool is built for—the data was right about the swing, the scout was right about the flaw, and neither was sufficient alone.

HR has had its own version of this problem for decades. Human intuition in hiring and development is simultaneously our greatest asset and our most documented liability. We are pattern-recognition machines, extraordinarily fast, often accurate, and biased in ways we cannot see from the inside. We have oscillated between trusting gut instinct entirely—hiring people who ‘feel right,’ which often means ‘look like us’—and chasing the illusion that enough data will eventually make human judgment unnecessary.

And yet, we all know the experienced HR leader who senses something is off in an exit interview before the employee says the real thing, or the manager who knows that this candidate is going to be extraordinary even though the resume is unconventional. That intuition is real.

What if you no longer had to choose? The dichotomy between intuition and analytics is a false one; the most effective HR leaders are those who have learned to use each to interrogate the other.

Every conversation about AI and human judgment runs in one direction: how do we train AI on human data? Almost nobody asks the reciprocal question: can AI-validated data be used to train human judgment in return?

AI allows us to do something new: run our instincts against data rapidly and at scale, over time, and find out where our pattern recognition is calibrated and where it drifts. This raises the question: can intuition be trained?

Research on expert intuition is precise on this point: reliable intuition develops only in domains where feedback is rapid and accurate (Kahneman & Klein, 2009).2 Surgery. Chess. Firefighting. HR has historically been the opposite—a domain where feedback is slow, ambiguous, and often absent entirely. The practitioner who made a hiring decision three years ago must actively seek out clear information about whether it was a good one. AI changes this feedback architecture, and in so doing, changes what is possible.

What This “Reverse Loop” Looks Like in Practice

You have spent years developing a felt sense for talent, team dynamics, and performance trajectories. You are often right, but you have never been able to see your own error rate or understand why your intuition fires when it does. Consider what becomes possible when an HR leader can see, over years of decisions, exactly where their instincts were right and where they drifted: a running account of the relationship between felt confidence and eventual accuracy.

A system that tracks hiring decisions against eventual performance outcomes, performance assessments against subsequent trajectory, and engagement signals against eventual attrition—and makes this information available to the practitioners who made the original judgments—would give HR professionals accurate feedback on the quality of their own professional intuition over time.

The manager who discovers their first-interview gut reactions correlate strongly with eventual performance learns something important. The manager who discovers their enthusiasm for a particular type of candidate consistently fails to predict success learns something equally important in the opposite direction. AI makes both possible.

This could offer infrastructure for expertise development in a domain where expertise has historically been impossible to build deliberately: wiser judgment, developed over time.

A note on where this stands: the reverse loop as described here is a proposed framework. Some organizations are in the process of building it—tracking hiring decisions against performance outcomes, using engagement data to test predictive models against attrition. Technology now makes this possible, and even smaller organizations can create meaningful insight through disciplined habits supported by accessible tools. The teams building it, at any scale, are developing an advantage their competitors will struggle to replicate.

The Honest Caution

The reverse loop may reveal uncomfortable things. Human intuition in hiring and development is simultaneously an asset and a liability. The research is unambiguous: unstructured interviews have notoriously low predictive validity for job performance (Schmidt & Hunter, 1998).3 ‘Culture fit’ is frequently a euphemism for affinity bias—documented most rigorously in elite professional services contexts (Rivera, 2012).4 Organizations that want to build this feedback architecture need to create psychological safety around receiving what it reveals. The goal is development of future judgment, not accountability for past errors.

The reverse loop is the infrastructure for building the four capacities that AI cannot build for you.

STEAM over STEM

As AI absorbs pattern-recognition work across professional domains, the comparative advantage of human practitioners shifts toward the capacities that AI cannot replicate: analogical thinking, theory of mind, ambiguity tolerance, ethical reasoning. These are the capacities that engagement with art, literature, music, history, and philosophy develops. This is a window into where scarcity is moving, and HR teams must build these capacities into their assessment, hiring, and development strategies.

The Leonardo Principle

Leonardo da Vinci kept anatomical notebooks and painted portraits of extraordinary psychological depth. He did not experience these as separate activities. Einstein played violin and credited musical thinking with influencing his physics. Feynman drew and played percussion. Barbara McClintock, Nobel laureate for her work on genetic transposition, described her method as ‘developing a feeling for the organism’ (Root-Bernstein & Root-Bernstein, 2004).5

These are not anecdotes about well-rounded people. This is evidence that the creative faculty—the willingness to see familiar things as strange, to make unexpected connections—is not domain-specific. And it develops through engagement with art in ways that technical training alone does not replicate. The HR leader of the AI era does not need to be a data scientist, although baseline technological fluency is table stakes. They need the formation that makes them capable of doing the anchor work with excellence: the difficult conversations, the ethical navigation, the empathic modeling of what organizational decisions mean for the people inside them. That formation looks more like a serious liberal arts education than current HR professional development suggests.

Frederick’s Bet

Leo Lionni’s Frederick6 is a children’s book about a mouse who doesn’t gather food for the winter. While the other mice collect corn and nuts, Frederick sits in the sun, gathering colors, warmth, and words. When the food runs out in February, Frederick is the one who keeps the community alive by relying on skills that don’t register as productive until they are suddenly essential.

The HR leader who greenlights a leadership development program built around literary discussion, or who hires for philosophical depth alongside technical skill, is making Frederick’s bet. The ROI is invisible until it isn’t. You will see the benefits in the difficult conversation handled with genuine wisdom, the ethical call made clearly under pressure, the team that holds together when the situation has no clean answer. That is what we must build for.

STEAM is the argument for why the four capacities need to be present at the design stage.

The Design-Stage Argument

AI systems embed values whether their builders are conscious of it or not. What data to train on, what outcome to optimize for, what counts as an acceptable error rate—these are value choices made in technical language. A team without humanistic formation makes them unconsciously, defaulting to what is measurable and what worked last time. The result is systems that are technically sophisticated and humanistically naive.

The solution is to have humanistic thinking present at the design stage—when the optimization target is being chosen, when training data is being selected, when error rate tradeoffs are being made. For HR leaders deploying AI in people processes, this means the team making those decisions needs people with enough humanistic formation to ask what this model is not seeing before it is deployed.

Note that this is a distinct argument from the scarcity argument above. The scarcity argument says: as AI absorbs pattern-recognition work, humanistic capabilities become the comparative advantage, so hire and develop for them. The design-stage argument says: someone with humanistic formation needs to be in the room when AI systems are being built, because value choices are being made, whether or not anyone names them. Both are true, and both have implications for where you invest.


Up Next: Eight Steps for Turning AI Interest Into HR Capability


References
  1. Trouble with the Curve, directed by Robert Lorenz, Warner Bros., 2012, https://www.warnerbros.com/movies/trouble-curve

  2. Kahneman, Daniel, and Gary Klein, “Conditions for Intuitive Expertise: A Failure to Disagree,” American Psychologist, 2009, https://europepmc.org/abstract/MED/19739881

  3. Schmidt, Frank L., and John E. Hunter, “The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings,” Psychological Bulletin, 1998, https://books.google.com/books/about/The_Validity_and_Utility_of_Selection_Me.html?id=f4o2NQAACAAJ

  4. Rivera, Lauren A., “Hiring as Cultural Matching: The Case of Elite Professional Service Firms,” American Sociological Review, 2012, https://www.kellogg.northwestern.edu/academics-research/research/detail/2012/hiring-as-cultural-matching-the-case-of-elite-professional/

  5. Root-Bernstein, Robert, and Michele Root-Bernstein, “Artistic Scientists and Scientific Artists: The Link Between Polymathy and Creativity,” in Robert Sternberg, Elena Grigorenko, and Jerome Singer, eds., Creativity: From Potential to Realization, APA, 2004, https://gwern.net/doc/psychology/energy/2004-rootbernstein.pdf

  6. Lionni, Leo, Frederick, Pantheon Books, 1967, https://www.penguinrandomhouse.com/books/101916/frederick-by-leo-lionni/