
EU AI Act

The EU AI Act, formally approved by the European Parliament in March 2024, is the world’s first comprehensive regulatory framework for artificial intelligence, and it significantly impacts recruitment practices. Under the Act’s risk-based classification system, AI applications used in employment—including systems for recruiting, evaluating applicants, and making promotion decisions—are categorized as “high-risk.” This designation requires recruitment AI tools to comply with strict standards for quality, transparency, human oversight, and safety. Organizations using AI-powered recruitment technologies such as applicant tracking systems that automatically rank candidates, video interview platforms analyzing candidate responses, or chatbots handling initial screenings must ensure these systems are designed to avoid bias and discrimination, maintain transparency about AI usage, and preserve human judgment in decision-making.

What Palantrix does to comply

To enhance fairness in AI video interviews for recruitment, Palantrix applies statistical and algorithmic methods such as Iterative Predictor Removal (IPR), which progressively eliminates biased variables, and Multipenalty Optimization (MPO), which balances accuracy against fairness metrics during model training. These techniques are grounded in the machine-learning fairness literature (e.g., disparate impact theory and constrained optimization) and support compliance with the EU AI Act’s risk-based approach.

Iterative Predictor Removal (IPR) for Feature Selection

Systematically remove or downweight features (e.g., voice pitch) that correlate with protected attributes (gender, race) through iterative testing: start with a full model, measure bias using metrics like demographic parity, and remove predictors one by one until bias falls below a threshold (e.g., a 0.8 parity ratio, echoing the four-fifths rule).
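A minimal sketch of the IPR loop follows. The scoring function, candidate data, and 0.8 threshold are illustrative assumptions, not Palantrix's production implementation:

```python
def demographic_parity_ratio(scores, groups, cutoff=0.5):
    """Min/max ratio of per-group selection rates (1.0 = perfect parity)."""
    rates = {}
    for g in set(groups):
        flags = [s >= cutoff for s, gg in zip(scores, groups) if gg == g]
        rates[g] = sum(flags) / len(flags)
    top = max(rates.values())
    return 1.0 if top == 0 else min(rates.values()) / top

def iterative_predictor_removal(predictors, score_fn, groups,
                                cutoff=0.5, min_ratio=0.8):
    """Drop predictors one at a time until parity clears `min_ratio`."""
    active = list(predictors)
    while len(active) > 1:
        if demographic_parity_ratio(score_fn(active), groups, cutoff) >= min_ratio:
            break
        # Remove the predictor whose absence most improves parity.
        worst = max(active, key=lambda f: demographic_parity_ratio(
            score_fn([p for p in active if p != f]), groups, cutoff))
        active.remove(worst)
    return active

# Toy data: voice pitch correlates with group membership, skill does not.
candidates = ([{"skill": 0.8, "voice_pitch": 0.9, "group": "A"}] * 5
              + [{"skill": 0.8, "voice_pitch": 0.1, "group": "B"}] * 5)
groups = [c["group"] for c in candidates]
score_fn = lambda active: [sum(c[f] for f in active) / len(active)
                           for c in candidates]

kept = iterative_predictor_removal(["skill", "voice_pitch"], score_fn, groups)
```

On this toy data the loop removes `voice_pitch` and keeps `skill`, since dropping the pitch feature restores parity between the two groups.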

Multipenalty Optimization (MPO) in Model Training

Optimise AI models with multiple penalties in the loss function, penalising both accuracy errors and bias (e.g., add terms for equalized odds or group fairness). Use gradient descent to minimise a combined loss L = L_accuracy + λ1·L_bias + λ2·L_fairness, tuning the λ weights via grid search.
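As an illustrative sketch of the combined loss (the toy data, λ values, and finite-difference optimiser below are assumptions for demonstration only):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def combined_loss(w, X, y, groups, lam1, lam2):
    """L = L_accuracy + lam1 * L_bias + lam2 * L_fairness."""
    preds = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for x in X]
    eps = 1e-9
    # L_accuracy: mean log loss on the task labels.
    l_acc = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                 for p, t in zip(preds, y)) / len(y)
    mean = lambda g: (sum(p for p, gg in zip(preds, groups) if gg == g)
                      / groups.count(g))
    gap = mean("A") - mean("B")  # demographic-parity gap between groups
    return l_acc + lam1 * abs(gap) + lam2 * gap ** 2

def train(X, y, groups, lam1, lam2, lr=0.5, steps=300, h=1e-5):
    """Minimise the combined loss by finite-difference gradient descent."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        base = combined_loss(w, X, y, groups, lam1, lam2)
        grad = [(combined_loss(w[:i] + [w[i] + h] + w[i + 1:],
                               X, y, groups, lam1, lam2) - base) / h
                for i in range(len(w))]
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return w

def parity_gap(w, X, groups):
    preds = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for x in X]
    mean = lambda g: (sum(p for p, gg in zip(preds, groups) if gg == g)
                      / groups.count(g))
    return abs(mean("A") - mean("B"))

# Toy data: feature 0 is skill, feature 1 leaks group membership.
X = [[0.9, 1], [0.8, 1], [0.7, 1], [0.3, 0], [0.2, 0], [0.1, 0]]
y = [1, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]

gap_plain = parity_gap(train(X, y, groups, 0.0, 0.0), X, groups)
gap_fair = parity_gap(train(X, y, groups, 2.0, 2.0), X, groups)
```

With the penalties switched off the model exploits the group-leaking feature and the score gap is large; with λ1 = λ2 = 2 the gap shrinks. In practice the λ weights would be tuned by grid search against held-out fairness metrics.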

Adversarial Debiasing with Human Oversight

Train AI with an adversary network that tries to predict protected attributes from model outputs, forcing the main model to produce bias-free predictions. Combine with human review loops for 10-20% of outputs to validate. 

Diverse Dataset Augmentation and Synthetic Data Generation

Augment training data with synthetic videos (e.g., via GANs, or SMOTE-style interpolation for imbalanced classes) to include underrepresented demographics, then apply MPO to fine-tune. Validate by comparing metrics such as AUC across demographic subgroups.
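At the feature-vector level, SMOTE-style augmentation interpolates between existing minority-group samples. A minimal sketch (the data and target count are illustrative):

```python
import random

def smote_oversample(samples, target_count, seed=0):
    """SMOTE-style oversampling: synthesize minority-group feature vectors
    as convex combinations of randomly chosen existing pairs."""
    rng = random.Random(seed)
    synthetic = list(samples)  # keep the originals
    while len(synthetic) < target_count:
        a, b = rng.sample(samples, 2)
        t = rng.random()
        # Each synthetic point lies on the segment between a and b.
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

# Toy minority-group feature vectors, oversampled to 10 examples.
minority = [[0.1, 0.2], [0.3, 0.4], [0.2, 0.9]]
augmented = smote_oversample(minority, 10)
```

Because every synthetic point is a convex combination of two real points, all augmented values stay within the observed feature ranges, which keeps the synthetic data plausible before the MPO fine-tuning step.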

Post-Hoc Fairness Calibration with Continuous Auditing

After inference, calibrate scores using techniques like Platt scaling to equalize error rates across groups, followed by automated audits (e.g., quarterly bias checks with A/B testing). Incorporate feedback loops for model retraining. 
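A minimal sketch of per-group Platt scaling, fitting a separate sigmoid to each group's scores (the toy records and learning rate are assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def platt_fit(scores, labels, lr=0.5, steps=2000):
    """Fit p = sigmoid(a * s + b) to labels by gradient descent on log loss."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(steps):
        preds = [sigmoid(a * s + b) for s in scores]
        ga = sum((p - t) * s for p, t, s in zip(preds, labels, scores)) / n
        gb = sum(p - t for p, t in zip(preds, labels)) / n
        a, b = a - lr * ga, b - lr * gb
    return a, b

def calibrate_by_group(records):
    """Per-group Platt scaling; records are (raw_score, label, group)."""
    out = []
    for grp in {g for _, _, g in records}:
        subset = [(s, t) for s, t, g in records if g == grp]
        a, b = platt_fit([s for s, _ in subset], [t for _, t in subset])
        out.extend((sigmoid(a * s + b), t, grp) for s, t in subset)
    return out

# Toy data: group "A" raw scores are systematically inflated.
records = ([(0.9, 1, "A"), (0.8, 1, "A"), (0.7, 0, "A"), (0.6, 0, "A")]
           + [(0.5, 1, "B"), (0.4, 1, "B"), (0.3, 0, "B"), (0.2, 0, "B")])

calibrated = calibrate_by_group(records)
```

After calibration, each group's mean predicted score converges toward its actual pass rate (0.5 in this toy set), removing the systematic inflation of group "A". The quarterly automated audits would then re-check these per-group error rates and trigger retraining if they drift.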

These techniques provide a rigorous, scientific framework for fairer AI video interviews, emphasising predictability and equity in recruitment. IPR and MPO already form part of the Palantrix continuous assessment model, and as our evaluation data grows we will expand to the other techniques above. Palantrix applies an iterative approach to continuously evaluating our foundation models. Human oversight is critical to this evaluation; we will continue to monitor the models and will in future publish our audited findings to ensure transparency.

Regular Human Reviews

Palantrix will conduct monthly reviews of these evaluations, assessing AI performance against a number of indicators. For example, we will cross-reference answers across genders to check whether any disparity exists between similar answers. We will also utilise the Retrieval-Augmented Generation (RAG) framework, which combines information retrieval with generative AI to produce more accurate, up-to-date, and contextually relevant responses. All of these reviews will be human-led, with human oversight.
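One way such a monthly cross-gender check could be automated is sketched below; the reviewer would then examine whatever the script flags. The question IDs, scores, and 0.1 threshold are illustrative assumptions:

```python
def disparity_report(records, threshold=0.1):
    """Flag questions where mean scores for comparable answers differ across
    genders by more than `threshold`. Records: (question_id, gender, score)."""
    flagged = {}
    for q in {qid for qid, _, _ in records}:
        by_gender = {}
        for qid, gender, score in records:
            if qid == q:
                by_gender.setdefault(gender, []).append(score)
        means = {g: sum(v) / len(v) for g, v in by_gender.items()}
        if max(means.values()) - min(means.values()) > threshold:
            flagged[q] = means  # surface the per-gender means for human review
    return flagged

# Toy review batch: Q2 shows a gender gap on comparable answers.
batch = [
    ("Q1", "F", 0.82), ("Q1", "M", 0.80),
    ("Q2", "F", 0.60), ("Q2", "M", 0.85),
]
report = disparity_report(batch)
```

The script only surfaces candidates for scrutiny; the judgement about whether a flagged gap reflects genuine bias remains with the human reviewers, consistent with the human-led review process described above.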