- NYC Local Law 144-21
- Recalling The Dark Side of Traditional Recruiting Business
- Zero Sum Game: Bots vs. Bots
- Social Engineering Caused By AI-Driven Recruiting
- Getting Past HR Recruiter & AI Screening Aid
- Faking Expertise In Virtual Interviews Is a Piece Of Cake
- Generational Gap And Turf Protection – Decoded
- Bald and Gray-Haired People – To The Rescue
- Conclusion
NYC Local Law 144-21
New York City’s Local Law 144-21 is one of the first real attempts to put guardrails around AI in hiring. Its purpose is to bring some order to Automated Employment Decision Tools (AEDTs)—the algorithms, models, and scoring systems that increasingly shape who gets interviewed, promoted, or filtered out. The law pushes for transparency and fairness by requiring organizations to commission independent bias audits of the tools they use, publish the results, and notify candidates that such tools are part of their evaluation.
That said, it’s important to be honest about what this undertaking does and does not solve. Even if the law is well intentioned, it’s still a partial fix. It is also worth noting that companies hiring for NYC positions can still sidestep the law by routing candidates through non-NYC-based hiring centers that run the same AI-driven HR processing.
But let’s assume that this worthwhile, if NYC-only, effort of bias audits and public disclosures can reasonably reduce some hidden discrimination. It still doesn’t address the deeper shift happening in the hiring market: the entire system is being reshaped by automation and the steady removal of human participation. So if we stop at “compliance,” we miss the bigger picture.
In this post, let’s look at hiring and recruitment from a wider angle—starting with one traditional problem that existed in recruiting long before AI, and then moving on to newer challenges created by AI automation.
Note: Some examples below are given in the context of agile role definition and agile role hiring. It is recommended to review each image closely, by clicking on it, to better understand the respective point.
Recalling The Dark Side of Traditional Recruiting Business
Here is one very traditional problem with recruiting and hiring. Long before AI entered the picture, the recruiting business already carried deep structural flaws that rewarded manipulation over integrity. Headhunters sit in the middle of a transaction where their incentives are misaligned with both sides: the company wants the best possible talent for the lowest price, and the worker wants fair pay and honest representation. To protect their margin, some recruiters “shave” pay rates aggressively, low-balling skilled workers and presenting it as market reality even when budgets allow more. For example, by chasing lower-quality offshore applicants and onshore H-1B visa holders—people who typically agree to work for significantly lower rates—headhunters keep a bigger rate cut for themselves.
At the same time, client needs are frequently simplified or distorted when described to candidates, while candidates’ skills are polished, exaggerated, or selectively framed when presented to clients. What emerges is not a process of matching the best people to specific client needs, but an ill-defined matchmaking process in which information is bent in both directions.
The end result is predictable: strong candidates either never get called or walk away in frustration, while underqualified ones get placed. Companies are left disappointed and frustrated, and the system quietly repeats itself because the middle layer still gets paid.
Zero Sum Game: Bots vs. Bots
Once automation enters the picture, the old problems do not disappear or become less important—they get amplified. Tools that were meant to make recruiting faster and more efficient have started to reshape behavior on both sides of the market. Platforms like jobhire.ai now generate applications at massive scale and overwhelm the market, while AI-driven screening tools filter them just as quickly. It is not clear who started this game or who provoked whom first, but at this point it hardly matters. What this creates is not clarity but noise: volume replaces value, and speed replaces judgment. In that environment, real people, especially highly qualified individuals, become collateral damage. Why? Because the best people are filtered out not because they lack ability, but because they are not as good at gaming the system—constantly optimizing, mass-applying, and stuffing keywords. Such behavior has always been beneath them. Modern hiring drifts away from a deep understanding of a candidate’s qualifications and overall context, and turns into an “AI bots arms race,” where the rate of output (the speed of the process) becomes more important than the actual outcome.
Social Engineering Caused By AI-Driven Recruiting
As volume and automation take over, a new weakness becomes visible: systems designed for efficiency are surprisingly easy to exploit. The same tools meant to protect people can often be manipulated and misused. It is now often simpler to deceive or socially engineer automated recruiting pipelines by faking a “senior recruiter’s” profile. On platforms like LinkedIn, fake “AI recruiters” routinely lure job seekers into sharing personal information (e.g., resumes, contact details, employment history) through fabricated job postings. That data is then reused to create fake candidate profiles, which in turn are used to scam employers by impersonating real applicants. Over time, hiring platforms become polluted with noise, quality drops, trust erodes, and many frustrated people leave for other platforms that are, presumably, safer.
In this distorted system, bots and scripted identities move smoothly through weakly defended pipelines, while real candidates collide with rigid filters that reject them in the least objective ways. The process flips on itself: deception becomes easier than honesty. This is another example of how hiring technology is being optimized for control and efficiency, not for truth, judgment, or human understanding.
Getting Past HR Recruiter & AI Screening Aid
So, let’s assume that a candidate’s profile gets picked up by the system and is presented to an HR recruiter. What next? The recruiter consults an AI assistant to further evaluate the candidate’s profile (buzzwords, jargon) against the requirements in a job spec. This colander-like filtering waves the least experienced, buzzword-optimized candidates through with cheers, while seasoned industry veterans—those with genuine depth, long track records, and real capability—get stuck and discarded.
This echoes a central point raised earlier in this article: automation doesn’t eliminate old recruiting flaws, it amplifies them. When recruiters defer judgment to AI systems trained on noisy or incomplete data and lack deep domain insight themselves, the metric becomes how well a candidate can game the algorithm—not whether they are truly the best fit. What the industry ends up advancing are title-flippers and keyword experts, while high-quality professionals drown in the noise before they ever meet a human decision-maker.
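To make the filtering mechanism above concrete, here is a minimal sketch of the kind of naive keyword screen such tools often amount to. The job-spec keywords, resume snippets, and scoring scheme are invented for illustration, not taken from any real product:

```python
# Hypothetical sketch of a naive keyword-overlap resume screen.
# All keywords and resume text below are invented for illustration.

def keyword_score(resume: str, spec_keywords: set[str]) -> float:
    """Fraction of job-spec keywords that literally appear in the resume text."""
    words = set(resume.lower().split())
    return len(words & spec_keywords) / len(spec_keywords)

spec = {"agile", "scrum", "kanban", "okrs", "velocity"}

# A buzzword-stuffed resume matches every term in the spec...
stuffed = "Agile Scrum Kanban OKRs velocity transformation ninja"
# ...while a substantive one, written in plain language, matches none.
veteran = "Led org redesign across 4 product lines; coached teams for 15 years"

print(keyword_score(stuffed, spec))   # 1.0 -> passes the screen
print(keyword_score(veteran, spec))   # 0.0 -> filtered out
```

The sketch shows the core pathology: literal keyword overlap rewards surface vocabulary and is blind to substance, so the "title-flippers and keyword experts" score perfectly while the veteran scores zero.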
Faking Expertise In Virtual Interviews Is a Piece Of Cake
Moving on. If a real candidate somehow survives the noise, the bots, the scams, and the filters, they are finally “blessed” with a meeting with a hiring manager (hopefully a real one, not a deepfake). But even this human interaction is no longer immune to distortion. Hiring is drifting into a theater of performance rather than a test of real capability. With so many interviews now conducted online, interference from speech-to-text tools and chatbots is inevitable, enabling instant copy-paste answers through which candidates can outsource their “thinking” in real time.
Many candidates slide into this kind of behavior not because they are inherently cheaters, but because the system has conditioned them to fear loss. Reaching a real interview has become so rare, so fragile, that even the most ethically strong candidates cannot risk failing and being thrown back to the bottom of a massive pile of names to be processed by AI all over again. So they stop attempting to give unique, comprehensive answers and fall back on search-engine-style definitions that are, most likely, what the screeners expect. The interview stops measuring deep understanding, true experience, and judgment, and starts measuring how well someone can operate a commoditized stack of terms, definitions, and populist jargon during a conversation. Polished answers appear instantly, but they may belong more to an algorithm than to the person speaking. In trying to modernize interviews, organizations risk losing the very thing interviews are meant to reveal: whether there is real substance behind the words.
Generational Gap And Turf Protection – Decoded
For the very few who manage to survive the barrage of virtual screens, bots, scams, filters, and AI gatekeepers—and finally sit across from real humans—another challenge begins. Ironically, this is where the most qualified and experienced candidates often struggle the most. For example, modern “Agile” hiring is frequently shaped by a mix of generational disconnect and turf protection that has little to do with finding the best leader for the work. Instead of exploring deep experience in organizational design, product thinking, and technical coaching, panels drift toward shallow signals: fashionable buzzwords, narrow framework trivia, and cultural cues that feel more like social sorting than professional evaluation.
In this environment, real expertise becomes a liability. Candidates who think independently, challenge weak assumptions, or expose uncomfortable truths are, at best, misunderstood and, at worst, seen as risky or dangerous whistleblowers. Meanwhile, those who speak plain, rudimentary language and fit neatly into existing power structures succeed. The system ends up rewarding conformity over competence, protecting internal empires, and widening the gap between what organizations claim they want and what they actually choose.
Bald and Gray-Haired People – To The Rescue
It is important to understand that the aforementioned problems do not hurt only worthy candidates—they also damage the companies themselves. When hiring systems filter out deep knowledge and reward performance, organizations stop getting the best talent and start listening to the loudest, most polished voices emerging from the “safe haven” of corporate dysfunction. Senior leaders end up guided not by those with the deepest experience, but by those who can speak trendy jargon fluently and sell hot air with confidence. Over time, this leads executives down paths shaped more by fashion than by understanding.
Organizations begin to surround leadership with a noisy “advisory” layer that optimizes for optics—dashboards, slogans, velocity charts, “AI vs. Agile” debates, and promises of speed—often at the expense of hard-earned judgment. Meanwhile, experienced practitioners, the ones who have lived through multiple cycles of failure and recovery, are pushed aside as outdated or inconvenient. Only when complexity finally breaks through the hype are leaders forced back to square one, relying again on the old crew: the people whose knowledge was built over decades. The problem is not a lack of innovation; the old crew unmistakably has it in their arsenal. The problem is mistaking noise for wisdom, and performance for the ability to properly diagnose and treat a problem.
Conclusion
In the end, the rise of AI in HR and recruiting is neither inherently good nor inherently bad—it is a reflection of how we choose to shape and govern it. As this illustrative summary has explored, when left unchecked, automation can amplify existing flaws, erode human judgment, and reward superficial performance over genuine capability. We must remember that like dissolves like: systems that encourage low-effort optimization will attract low-quality behavior, just as thoughtful, ethical design will attract the best outcomes.
That is why the use of AI in hiring should remain tightly controlled and governed, with clear boundaries that prevent us from crossing red lines and drifting into a world where norms, values, and ethics get watered down and trivialized. With intentional policies, careful oversight, and a commitment to preserving the human core of work, we can harness the benefits of AI without sacrificing trust, fairness, and dignity. Ultimately, the goal is not to stop progress, but to ensure that progress serves people, not just processes—and that is a future worth building.


