
How to use AI in your hiring process without breaking EU law

The EU AI Act is here. If you use AI to screen candidates, score applications, or automate decisions, here is what you need to do to stay compliant.

99% of recruiting teams use AI. Most don't know the rules just changed.

Almost every talent acquisition team in Europe now uses some form of AI — resume screening, candidate scoring, chatbots, automated outreach. The tools have gotten cheap and easy to implement. The legal framework just caught up.

The EU AI Act classifies AI systems used in recruitment and employment decisions as "high-risk." That's not a suggestion. It's a legal category with specific obligations. If you're using AI in hiring, you need to understand what this means before a regulator explains it to you.

What the EU AI Act says about hiring

Recruitment AI is "high-risk"

Under the Act, AI systems used for recruiting, screening candidates, evaluating applications, or making employment decisions are classified as high-risk. This applies whether you built the AI yourself or bought it from a vendor.

High-risk doesn't mean "banned." It means you have to meet specific requirements: transparency, human oversight, data quality, documentation, and bias monitoring. Ignore these and you face fines of up to €15 million or 3% of global annual turnover, whichever is higher.

What you need to do

Transparency: Candidates must be informed that AI is being used in the hiring process. Not buried in terms and conditions — clearly communicated. If an AI system screened out their application, they have a right to know.

Human oversight: AI can't make final hiring decisions alone. A human must review and be able to override automated recommendations. If your ATS automatically rejects candidates based on AI scoring, that process needs a human checkpoint.

Bias audits: You need to regularly test your AI systems for bias across protected characteristics — gender, ethnicity, age, disability. Document the results. If bias is found, you need to fix it and document the fix.

Data quality: The data you feed into AI systems must be relevant, representative, and up to date. Training a model on historical hiring data? If your past hiring was biased (it probably was), your AI will replicate that bias.
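The human-oversight requirement above can be sketched as a routing rule in a screening pipeline: the AI score may order the review queue, but no candidate is ever auto-rejected. This is a minimal illustration, not a real ATS integration; the class, threshold, and queue names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_score: float  # output of a hypothetical screening model, 0.0-1.0

def route_candidate(candidate: Candidate, threshold: float = 0.5) -> str:
    """Route every candidate to a human decision point.

    The AI score prioritizes review order, but low scorers go to a
    human review queue instead of being silently dropped -- the human
    checkpoint the Act's oversight requirement calls for.
    """
    if candidate.ai_score >= threshold:
        return "shortlist_review"  # human reviews the AI's shortlist
    return "manual_review"         # human checkpoint before any rejection

print(route_candidate(Candidate("A", 0.8)))  # shortlist_review
print(route_candidate(Candidate("B", 0.3)))  # manual_review
```

The key design point: both branches end at a human, so the system produces recommendations, not decisions.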

Where startups get this wrong

"Our vendor handles compliance"

No, they don't. Under the EU AI Act, the "deployer" — that's you, the company using the AI — has compliance obligations regardless of who built the tool. Your vendor may have their own obligations as a provider, but you can't outsource your legal responsibility.

Ask your vendor: How does your system make decisions? What data was it trained on? Has it been tested for bias? Can candidates be told why they were rejected? If your vendor can't answer these questions, that's a problem.

"We only use it for screening, not decisions"

Screening is a decision. When an AI system filters out 80% of applications before a human sees them, those filtered candidates have been subjected to an automated decision. The fact that a human makes the final hire doesn't matter if the AI decided who the human gets to see.

"It's just matching keywords"

Even simple keyword matching in an ATS can create discriminatory outcomes. If you require "native English speaker" and the AI filters on that phrase, you're potentially discriminating by national origin. If the system deprioritizes candidates with employment gaps, it may discriminate against women who took parental leave.

Simplicity doesn't equal compliance. The question is whether the system's outputs create unfair disparate impact, not how sophisticated the technology is.

A practical compliance checklist

1. Audit your current tools. List every AI or automated system in your hiring process: ATS resume parsing, chatbots, assessment platforms, sourcing tools. For each one, understand what decisions it makes or influences.

2. Inform candidates. Add clear language to your application process: "We use AI-assisted tools to help process applications. A human reviews all shortlisted candidates and makes final decisions." Be specific about what AI does.

3. Ensure human oversight. For every automated decision point, define who reviews the AI's output and how they can override it. Document this process.

4. Run a bias audit. Analyze your AI-assisted hiring outcomes by gender, age, and nationality at minimum. Are certain groups being filtered out at disproportionate rates? If yes, investigate and fix.

5. Document everything. The EU AI Act requires documentation of your AI systems, their purpose, their risks, and your mitigation measures. If an audit happens, "we didn't think it was a big deal" is not a defense.

6. Train your team. Make sure recruiters and hiring managers understand which tools use AI and what the rules are. The best compliance policy is useless if the people using the tools don't know about it.
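The bias audit in step 4 can be sketched with selection rates and the four-fifths rule, a screening heuristic borrowed from US EEOC practice (it is not a legal test defined in the AI Act itself). The group labels and numbers below are illustrative.

```python
from collections import Counter

def selection_rates(outcomes):
    """Selection rate per group: candidates advanced / total applicants.

    `outcomes` is a list of (group, advanced) pairs, e.g. exported
    from your ATS. Group labels here are purely illustrative.
    """
    totals, advanced = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 80% of the
    best-performing group's rate -- a common red-flag heuristic for
    disparate impact, meant to trigger investigation, not a verdict."""
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# 100 applicants per group: 40 men advanced, 25 women advanced.
outcomes = ([("men", True)] * 40 + [("men", False)] * 60
            + [("women", True)] * 25 + [("women", False)] * 75)
rates = selection_rates(outcomes)   # {'men': 0.4, 'women': 0.25}
flags = four_fifths_check(rates)    # women: 0.25 / 0.4 = 0.625 < 0.8
```

A failing flag is the starting point of step 4's "investigate and fix," not the end of it: document what you found, what you changed, and re-run the audit.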

AI in hiring is worth it — done right

This isn't an argument against using AI in recruitment. AI can reduce time-to-hire, remove some human biases (while introducing others), and help small teams process more candidates efficiently.

But doing it right matters. Companies that get ahead of compliance will have a competitive advantage — candidates increasingly care about fair hiring practices, and "we use AI responsibly with human oversight" is becoming a selling point, not just a legal requirement.

Use AI. Just know the rules. And when in doubt, ask a human.

Need help hiring or growing?

We help European startups find great people and grow faster.