Limited Attempts To Solve AI-Related Problems In Hiring
New York City’s Local Law 144-21 is one of the first real attempts to put guardrails around AI in hiring. It targets Automated Employment Decision Tools (AEDTs)—the algorithms, models, and scoring systems that increasingly shape who gets interviewed, promoted, or filtered out. The law pushes for transparency and fairness by requiring organizations to run independent bias audits, publish the results, and notify people when these tools are used in an evaluation.
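To make “bias audit” slightly less abstract: the central metric behind such audits is the impact ratio, i.e., the selection rate for each demographic category divided by the selection rate of the most-selected category. Below is a minimal, hypothetical Python sketch of that calculation; the toy data and names are assumptions for illustration, not a compliance tool, and an actual audit under the law must be performed by an independent auditor.

```python
# Minimal sketch of the impact-ratio calculation behind an AEDT bias audit.
# The data, column names, and categories are illustrative assumptions only.

from collections import defaultdict

# Hypothetical screening outcomes: (demographic_category, selected_by_tool)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Selection rate per category: selected / total.
totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in outcomes:
    totals[group] += 1
    selected[group] += was_selected

rates = {g: selected[g] / totals[g] for g in totals}
best_rate = max(rates.values())

# Impact ratio: each category's rate relative to the most-selected category.
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best_rate:.2f}")
```

An impact ratio well below 1.0 for some group is exactly the kind of disparity a published audit is meant to surface; the familiar “four-fifths rule” treats anything under 0.8 as a rough warning flag.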
That said, it’s important to be honest about what this law does and does not solve. Even if it is well intentioned, it remains a partial fix, and it applies only to NYC. Companies hiring for NYC roles can sidestep it by routing AI-driven HR processing through hiring centers based elsewhere. But let’s assume this worthy, if local, effort succeeds: bias audits and public disclosures can reduce some hidden discrimination. It still doesn’t address the deeper shift happening in the hiring market: the entire system is being reshaped by automation and the steady removal of human participation. So if we stop at “compliance,” we miss the bigger story.
In this post, let’s look at hiring and recruitment from a wider angle: starting with one traditional problem that has existed in recruiting long before AI, and then turning to some newer challenges created by automation.
Recalling The Dark Side of the Traditional Recruiting Business
Here is one very traditional problem with recruiting and hiring. Long before AI entered the picture, the recruiting business already carried deep structural flaws that rewarded manipulation over integrity. Headhunters often sit in the middle of a transaction where their incentives are misaligned with both sides: the company wants the best possible talent for the lowest price, and the worker wants fair pay and honest representation. To protect their margin, some recruiters “shave” pay rates aggressively, low-balling skilled workers and presenting it as market reality even when budgets allow more. At the same time, client needs are frequently simplified or distorted when described to candidates, while candidates’ skills are polished, exaggerated, or selectively framed when presented to clients. What emerges is not a process of matching people to work, but a trading floor where information is bent in both directions.
The end result is predictable: strong candidates walk away, underqualified ones get placed, companies feel disappointed, and workers feel used—yet the system quietly repeats itself because the middle layer still gets paid.
Zero-Sum Game: Bots vs. Bots
Once automation enters the picture, the old problems do not disappear; they simply get amplified. Tools that were meant to make recruiting faster and more efficient have started to reshape behavior on both sides of the market. Machines now generate applications at massive scale, and other machines race to filter them just as quickly. It is still not clear who started this game or who provoked whom. What this creates is not clarity but noise: volume replaces value, and speed replaces judgment. In that environment, real people, especially highly qualified individuals, become collateral damage. The best people are filtered out not because they lack ability, but because they are not as good at gaming the system: constantly optimizing, mass-applying, and stuffing keywords. That kind of hustle was always beneath them. Hiring drifts away from understanding, context, and potential, and turns into a “technical arms race” where scale matters more than substance, as the sketch below illustrates.
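To see why this arms race rewards gaming over substance, consider how crude the first gate often is. The following deliberately naive keyword scorer is a hypothetical sketch of the kind of matching many screening pipelines approximate; the keyword list, names, and sample texts are all assumptions for illustration.

```python
# A deliberately naive keyword-based resume scorer, sketched to show why
# keyword stuffing beats substance at this kind of gate. Hypothetical example.

REQUIRED_KEYWORDS = {"kubernetes", "agile", "python", "microservices", "ci/cd"}

def keyword_score(resume_text: str) -> float:
    """Fraction of required keywords found anywhere in the resume text."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS)

# A substantive resume that describes real work in its own words...
genuine = "Led a platform team for eight years; designed the CI/CD pipeline and container orchestration from scratch."
# ...versus a thin resume that simply echoes the job posting's vocabulary.
stuffed = "Kubernetes, Agile, Python, microservices, CI/CD expert. Kubernetes. Agile."

print(keyword_score(genuine))  # 0.2 -> filtered out despite real experience
print(keyword_score(stuffed))  # 1.0 -> passes the gate on vocabulary alone
```

Real screening tools are more elaborate than this, but the incentive structure is similar: a score like this rewards echoing the posting’s vocabulary, not demonstrating the work behind it.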
Social Engineering Caused By AI-Driven Recruiting
As volume and automation take over, a new weakness becomes visible: systems designed for efficiency are surprisingly easy to exploit. The same tools meant to protect people can often be manipulated more easily than they can be navigated honestly. It is now frequently simpler to deceive or socially engineer automated recruiting pipelines than to pass through them as a genuinely qualified person. On platforms like LinkedIn, fake “AI recruiters” routinely lure job seekers into sharing personal information through fabricated job postings. That data is then reused to create fake candidate profiles, which in turn are used to scam employers by impersonating real applicants. Over time, hiring platforms become polluted with noise, quality drops, trust erodes, and many frustrated people simply leave for other platforms that are presumably safer.
In this distorted system, bots and scripted identities move smoothly through weakly defended pipelines, while real candidates collide with rigid filters that reject them in the least objective ways. The process flips on itself: deception becomes easier than honesty. This reveals a deeper problem: hiring technology is being optimized for control and efficiency, not for truth, judgment, or human understanding.
Faking Expertise In Virtual Interviews Has Never Been Easier
If real candidates somehow survive the noise, the bots, the scams, and the filters, they are finally “blessed” with a meeting with a human hiring manager (hopefully a real one, and not a deepfake). But even this human interaction is no longer immune to distortion. Hiring is drifting into a theater of performance rather than a test of real capability. As interviews move online and become layered with speech-to-text tools, chatbots, and instant copy-paste answers, candidates can outsource thinking in real time.
Many candidates slide into this kind of behavior not because they want to cheat, but because the system has trained them to fear loss. Reaching a real interview has become so rare, so fragile, that they cannot risk failing and being thrown back to the bottom of a massive pile of names to be processed by AI all over again. The interview stops measuring understanding, experience, and judgment, and starts measuring how well someone can operate an invisible support stack that feeds them terms and definitions during a conversation. Polished answers appear instantly, but they may belong more to an algorithm than to the person delivering them. In trying to modernize interviews, organizations risk losing the very thing interviews are meant to reveal: whether there is real substance behind the words.
Generational Gap And Turf Protection – Decoded
For the very few who manage to survive the barrage of virtual screens, bots, scams, filters, and AI gatekeepers, and finally sit across from real humans, another challenge begins. Ironically, this is where the most qualified and experienced candidates often struggle the most. Modern “Agile” hiring is frequently shaped by a mix of generational disconnect and turf protection that has little to do with finding the best leader for the work. Instead of exploring deep experience in organizational design, product thinking, and technical coaching, panels drift toward shallow signals: fashionable buzzwords, narrow framework trivia, and cultural cues that feel more like social sorting than professional evaluation.
In this environment, real expertise becomes a liability. Candidates who think independently, challenge weak assumptions, or expose uncomfortable truths are seen as risky or “too dangerous,” while those who fit neatly into existing power structures feel safer to hire. The system ends up rewarding conformity over competence, protecting internal empires over real outcomes, and widening the gap between what organizations claim they want and what they actually choose.
Bald and Gray-Haired People Will Still Be Needed
The aforementioned problems do not hurt only strong candidates; they also damage the companies themselves. When hiring systems filter out depth and reward performance, organizations stop getting the best talent and start listening to the loudest and most polished voices, the ones that were able to secure a spot for themselves in a “safe haven” of corporate dysfunction. Senior leaders are often guided not by those with the deepest experience, but by those who can speak trend language fluently and sell hot air with confidence. Over time, this leads executives down paths shaped more by fashion than by understanding.
Organizations begin to surround leadership with a noisy “advisory” layer that optimizes for optics—dashboards, slogans, velocity charts, “AI vs. Agile” debates, and promises of speed—often at the expense of hard-earned judgment. Meanwhile, experienced practitioners, the ones who have lived through multiple cycles of failure and recovery, are pushed aside as outdated or inconvenient. And then, when complexity breaks through the hype, leaders are forced back to square one, relying again on the old crew—the people whose knowledge was built over decades. The problem is not innovation. It is mistaking noise for wisdom, and performance for the ability to actually execute, learn, and adapt.
Conclusion
In the end, the rise of AI in HR and recruiting is neither inherently good nor inherently bad; it is a reflection of how we choose to shape and govern it. As this article has explored, when left unchecked, automation can amplify existing flaws, erode human judgment, and reward superficial performance over genuine capability. We must remember that like attracts like: systems that encourage low-effort optimization will attract low-quality behavior, just as thoughtful, ethical design attracts better outcomes. That is why the use of AI in hiring should remain tightly controlled and governed, with clear boundaries that prevent us from crossing red lines and drifting into a world where norms, values, and ethics are gradually degraded. With intentional policies, careful oversight, and a commitment to preserving the human core of work, we can harness the benefits of AI without sacrificing trust, fairness, and dignity. Ultimately, the goal is not to stop progress, but to ensure that progress serves people, not just processes. That is a future worth building.


