
11 Aug 2025
As regulation increases, with the EU Pay Transparency Directive as one prominent example, organizations are being pushed to examine the structures behind pay and recruitment more critically than ever. This isn’t just a bureaucratic obligation. It’s about fairness, transparency, and building trust. At the core of this shift lies the job description. Far more than a list of duties, it’s a structural element that directly influences pay levels, career paths, and hiring decisions.
Why Job Descriptions Matter
Job descriptions reflect an organization’s values, power dynamics, and often unquestioned assumptions. Outdated expectations or hidden biases can easily creep in, even with the best intentions. For instance, a requirement for “strong presentation skills” or “flexibility” might unintentionally exclude neurodiverse individuals. These biases don’t originate in recruitment ads alone; they’re often embedded in internal definitions and evaluation systems long before a job is posted.
AI: The Opportunities and Risks
AI and language models open powerful new opportunities to analyze unstructured data like job descriptions and compensation documents. They can help identify patterns, flag inconsistencies, and even suggest improvements. But AI is not neutral: it learns from humans, including human bias.
If past data shows a preference for a particular gender, age, or personality profile in certain roles, AI will pick up on and reinforce that pattern. A widely cited example is Amazon’s AI hiring tool, which began discriminating against women based on biased training data. It didn’t “intend” to be unfair; it simply mirrored the data it was fed. (Source: BBC News)
AI Isn’t Stupid, It’s Biased
AI doesn’t understand ethics, fairness, or societal context unless we teach it to. It can strengthen biased assumptions if we fail to critically evaluate the data we feed it. That’s why human oversight is essential. AI can support our thinking, but it cannot replace ethical judgment.
What, Then, Makes a Job Description Ethically Sustainable?
Here are some points we believe are good to keep in mind:
Objective Job Evaluation
Criteria should be based on the requirements of the role, not on assumptions about the type of person who has done it before. Anyone with the right qualifications should be able to succeed.
Clear and Detailed Language
Vague terms leave room for interpretation and bias. The more concrete and understandable the description, the fairer the process.
Bias Awareness
Language, experience requirements, and salary structures must be critically assessed. For example, requiring “native-level language skills” may unnecessarily exclude capable candidates.
Human & AI Collaboration
AI can help surface hidden patterns, but people must analyze and contextualize the results. Automation alone isn’t enough.
Transparency and Iteration
Job descriptions aren’t static documents. They should be reviewed and updated regularly, especially when roles or organizations evolve.
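As a concrete illustration of how human–AI collaboration on bias awareness might start, here is a minimal, rule-based sketch that scans a job description for potentially exclusionary phrases. The phrase list and rationales are hypothetical examples chosen for this article (drawing on the “native-level language skills” and “strong presentation skills” points above), not an authoritative taxonomy; any real review would combine broader lexicons, model-based analysis, and human judgment.

```python
# Illustrative sketch: flag potentially exclusionary phrases in a job posting.
# The phrase list below is hypothetical, not a validated bias lexicon.
FLAGGED_PHRASES = {
    "native-level": "may unnecessarily exclude capable non-native speakers",
    "young and dynamic": "age-coded language",
    "strong presentation skills": "may exclude neurodiverse candidates "
                                  "unless tied to actual duties",
}

def review_job_description(text: str) -> list[tuple[str, str]]:
    """Return (phrase, rationale) pairs for flagged phrases found in the text."""
    lowered = text.lower()
    return [(phrase, why) for phrase, why in FLAGGED_PHRASES.items()
            if phrase in lowered]

if __name__ == "__main__":
    posting = "We seek a young and dynamic engineer with native-level English."
    for phrase, why in review_job_description(posting):
        print(f"flagged: {phrase!r} - {why}")
```

A tool like this only surfaces candidates for discussion; as the points above stress, people must still analyze and contextualize each flag rather than treat the output as a verdict.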
An ethically sustainable job description is foundational to fairer work environments and practices. It doesn’t happen by accident; it takes deliberate effort, data-informed thinking, and a willingness to do better. AI can be a powerful ally in this work, but only if we guide it with ethical clarity. We are responsible for ensuring that AI supports us, rather than reinforcing our past inequities and prejudices.
Written in collaboration with human rights expert and Fairness & Friends advisory board member Baharak Bashmani.
