Algorithm Registry
Every automated decision system used on InTransparency is documented here. Students have the right to understand how any match or prediction about them was produced, to request human review, and to contest the outcome.
Classification: Under EU AI Act Regulation 2024/1689, Annex III §4, systems that evaluate candidates for employment are classified as high-risk. InTransparency's matching systems implement every required safeguard: transparency, human oversight, traceability, data governance, and the right to explanation.
Talent Match
Purpose
Rank students for a given role based on verified skills, projects, stages, and academic performance.
Audience: recruiters searching for candidates. Data subjects retain right-to-explanation access.
Inputs used
- Required skills — from Job posting
- Preferred skills — from Job posting
- Student skills (self-declared) — from Student profile
- Verified projects — from Project + ProfessorEndorsement
- Stage supervisor ratings — from StageExperience
- GPA (only if the student opted in to public display) — from Student profile
- Graduation year — from Student profile
- Location — from Student profile
Never used
- ✗ Gender
- ✗ Nationality
- ✗ Ethnicity
- ✗ Religion
- ✗ Age (beyond graduation-year cohort)
- ✗ Photo / any biometric inference
- ✗ Private GPA (if the student did not opt in)
Scoring weights
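The weights table is not reproduced here, but the ranking described above can be sketched as a weighted sum over normalized inputs. The weights and field names below are hypothetical, for illustration only; the registry's actual table governs. One design point worth showing: if GPA is not opted in, its weight is redistributed so opted-out students are not penalized.

```typescript
// Illustrative match-scoring sketch. All weights are ASSUMED values,
// not InTransparency's actual configuration.
interface MatchInputs {
  requiredSkillCoverage: number;   // 0..1, share of required skills verified
  preferredSkillCoverage: number;  // 0..1
  verifiedProjectScore: number;    // 0..1, from endorsed projects
  stageRating: number;             // 0..1, normalized supervisor rating
  gpa?: number;                    // 0..1, present only if the student opted in
}

// Hypothetical weight set (sums to 1.0 when GPA is present).
const WEIGHTS = {
  requiredSkillCoverage: 0.40,
  preferredSkillCoverage: 0.15,
  verifiedProjectScore: 0.25,
  stageRating: 0.15,
  gpa: 0.05,
};

// Weighted sum; when GPA is absent, remaining weights are renormalized
// so the maximum attainable score is still 1.
function matchScore(inputs: MatchInputs): number {
  const entries: Array<[keyof typeof WEIGHTS, number]> = [
    ["requiredSkillCoverage", inputs.requiredSkillCoverage],
    ["preferredSkillCoverage", inputs.preferredSkillCoverage],
    ["verifiedProjectScore", inputs.verifiedProjectScore],
    ["stageRating", inputs.stageRating],
  ];
  if (inputs.gpa !== undefined) entries.push(["gpa", inputs.gpa]);
  const totalWeight = entries.reduce((sum, [key]) => sum + WEIGHTS[key], 0);
  return entries.reduce((sum, [key, value]) => sum + (WEIGHTS[key] / totalWeight) * value, 0);
}
```

Note that none of the "never used" attributes appear in the input type: exclusion is structural, not a runtime filter.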
Human oversight
Every match can be reviewed by a university administrator who can flag, confirm, or override the decision. Reviews are persisted and reportable.
Your rights as a subject
- → Right to see the explanation for any match concerning you (/matches/[id]/why)
- → Right to request human review
- → Right to object to being listed in match results
- → Right to export all explanations concerning you
Bias testing
Monthly cohort parity tests compare match-score distributions across gender (where self-declared), university, and degree type. A gap greater than 5% between any two cohorts triggers manual review.
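The parity check above can be sketched as a pairwise comparison of cohort means. The 5% threshold comes from this registry; the function and data shapes are hypothetical.

```typescript
// Illustrative cohort parity check. Input shape and names are assumed,
// not the platform's actual pipeline.
function meanScore(scores: number[]): number {
  return scores.reduce((sum, x) => sum + x, 0) / scores.length;
}

// Returns every cohort pair whose mean match-score gap exceeds the
// threshold, flagging it for manual review.
function parityGaps(
  cohorts: Record<string, number[]>,
  threshold = 0.05,
): Array<[string, string, number]> {
  const names = Object.keys(cohorts);
  const flagged: Array<[string, string, number]> = [];
  for (let i = 0; i < names.length; i++) {
    for (let j = i + 1; j < names.length; j++) {
      const gap = Math.abs(meanScore(cohorts[names[i]]) - meanScore(cohorts[names[j]]));
      if (gap > threshold) flagged.push([names[i], names[j], gap]);
    }
  }
  return flagged;
}
```

A production test would compare full distributions (e.g. a two-sample test), not just means; the mean gap is the simplest version of the stated check.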
Compliance references
Placement Prediction
Purpose
Estimate a student's probability of securing a job offer within 6 months post-graduation.
Audience: Student and their university career service only. Never shown to recruiters.
Inputs used
- Verified project count — from Project
- Stage completions — from StageExperience
- Supervisor would-hire signal — from StageExperience
- GPA (opt-in only) — from Student profile
- Skill graph depth — from SkillDelta
Never used
- ✗ Gender
- ✗ Nationality
- ✗ Ethnicity
- ✗ Religion
- ✗ Family background
- ✗ Socio-economic data
Scoring weights
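As with the match score, the actual weights are not reproduced here. A probability-producing model over the five listed inputs could look like the sketch below; every coefficient is an assumed placeholder, and the logistic link is one plausible way to keep the output in (0, 1).

```typescript
// Illustrative placement-probability sketch. Coefficients are ASSUMED
// for demonstration, not the registry's real model.
interface PlacementFeatures {
  verifiedProjects: number;   // count, from Project
  stageCompletions: number;   // count, from StageExperience
  wouldHire: boolean;         // supervisor signal, from StageExperience
  skillDepth: number;         // 0..1, from the SkillDelta graph
  gpa?: number;               // 0..1, opt-in only
}

// Logistic link over a capped linear combination; caps prevent a single
// feature (e.g. many projects) from dominating the estimate.
function placementProbability(f: PlacementFeatures): number {
  let z = -1.0
    + 0.3 * Math.min(f.verifiedProjects, 5)
    + 0.4 * Math.min(f.stageCompletions, 3)
    + (f.wouldHire ? 0.8 : 0)
    + 0.5 * f.skillDepth;
  if (f.gpa !== undefined) z += 0.2 * f.gpa;
  return 1 / (1 + Math.exp(-z)); // probability of an offer within 6 months
}
```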
Human oversight
Predictions are advisory, never determinative. Career services can contextualize or suppress any prediction the student finds unhelpful.
Your rights as a subject
- → Right to view your own prediction
- → Right to request suppression from your dashboard
- → Right to have the prediction excluded from shared profiles
Bias testing
Counterfactual tests run on every model revision: each protected attribute is swapped in turn and the prediction recomputed. Any resulting change must stay below 2%.
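The swap-and-recompute test above can be sketched generically: hold the model's real features fixed, vary one protected attribute at a time, and take the largest observed prediction change. All names here are hypothetical, and `predict` stands in for whatever model is under test.

```typescript
// Illustrative counterfactual drift check. Data shapes and names are
// assumed; `predict` is a stand-in for the model under test.
interface Candidate {
  features: Record<string, number>;   // inputs the model is allowed to use
  protected: Record<string, string>;  // e.g. { gender: "f" } — swapped in tests
}

// Max absolute prediction change across single-attribute swaps.
// Per the registry's target, this drift should stay below 0.02.
function counterfactualDrift(
  predict: (c: Candidate) => number,
  candidate: Candidate,
  alternatives: Record<string, string[]>, // attribute -> alternative values
): number {
  const base = predict(candidate);
  let maxDrift = 0;
  for (const [attr, values] of Object.entries(alternatives)) {
    for (const value of values) {
      const swapped: Candidate = {
        features: candidate.features,
        protected: { ...candidate.protected, [attr]: value },
      };
      maxDrift = Math.max(maxDrift, Math.abs(predict(swapped) - base));
    }
  }
  return maxDrift;
}
```

A model that never reads `protected` yields a drift of exactly zero, which is the property the "never used" lists are meant to guarantee.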
Compliance references
Questions or concerns?
Write to . We respond within 14 days to any explanation request, human-review request, or other exercise of rights.
Data Protection Officer contact available on request. DPIA documentation shared with regulators on request.