AI Act — Public Registry

Algorithm Registry

Every automated decision system used on InTransparency is documented here. Students have the right to understand how any match or prediction about them was produced, to request human review, and to contest the outcome.

Classification: Under EU AI Act Regulation 2024/1689, Annex III §4, systems that evaluate candidates for employment are classified as high-risk. InTransparency's matching systems implement every required safeguard: transparency, human oversight, traceability, data governance, and the right to explanation.

Talent Match

v1.2.0
Rule-based scoring
Last audit: 2026-03-15

Purpose

Rank students for a given role based on verified skills, projects, internships (stages), and academic performance.

Audience: Recruiters searching for candidates; subjects have right-to-explanation access.

Inputs used

  • Required skills — from Job posting
  • Preferred skills — from Job posting
  • Student skills (self-declared) — from Student profile
  • Verified projects — from Project + ProfessorEndorsement
  • Stage supervisor ratings — from StageExperience
  • GPA (only if the student has opted in to public sharing) — from Student profile
    Opt-in only
  • Graduation year — from Student profile
  • Location — from Student profile

Never used

  • Gender
  • Nationality
  • Ethnicity
  • Religion
  • Age (beyond graduation year cohort)
  • Photo / any biometric inference
  • Private GPA (if student did not opt in)

Scoring weights

  • Required skills match — max 40 pts
  • Preferred skills match — max 15 pts
  • Verified projects — max 20 pts
  • Internship experience — max 15 pts
  • Academic performance (opt-in only) — max 10 pts

Maximum total: 100 pts.
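As a rough illustration, the rule-based weights above can be combined into a single scoring function. This is a minimal sketch in TypeScript, assuming normalized inputs and an assumed 5-points-per-project rule; every function and field name here is hypothetical, not the production API:

```typescript
// Hypothetical sketch of the Talent Match rule-based scoring.
// Component caps mirror the published weights; input shapes and the
// per-project point value are illustrative assumptions.

interface TalentMatchInputs {
  requiredSkillsMatched: number;   // fraction 0..1 of required skills met
  preferredSkillsMatched: number;  // fraction 0..1 of preferred skills met
  verifiedProjects: number;        // count of professor-endorsed projects
  internshipRating: number;        // mean supervisor rating, normalized 0..1
  gpaNormalized: number | null;    // 0..1, null unless the student opted in
}

function talentMatchScore(i: TalentMatchInputs): number {
  const required = 40 * i.requiredSkillsMatched;          // max 40 pts
  const preferred = 15 * i.preferredSkillsMatched;        // max 15 pts
  const projects = Math.min(20, 5 * i.verifiedProjects);  // max 20 pts (assumed 5 pts each)
  const internships = 15 * i.internshipRating;            // max 15 pts
  const academic =
    i.gpaNormalized === null ? 0 : 10 * i.gpaNormalized;  // max 10 pts, opt-in only
  return required + preferred + projects + internships + academic;
}
```

The total is capped at 100 points, and the academic component contributes nothing unless the student has opted in.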

Human oversight

Every match can be reviewed by a university administrator who can flag, confirm, or override the decision. Reviews are persisted and reportable.

Your rights as a subject

  • Right to see the explanation for any match concerning you (/matches/[id]/why)
  • Right to request human review
  • Right to object to being listed in match results
  • Right to export all explanations concerning you

Bias testing

Monthly cohort parity tests: match-score distributions are checked across gender (where self-declared), universities, and degree types. Differences greater than 5% trigger a manual review.
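The parity check can be sketched as follows. This is a hypothetical illustration assuming the 5% threshold is measured as a gap of more than 5 points between a cohort's mean score and the overall mean on the 0–100 scale; the real audit may use a different distributional statistic:

```typescript
// Illustrative cohort parity check: flag any cohort whose mean
// match score deviates from the overall mean by more than the
// threshold. Data shapes and the flagging rule are assumptions.

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function cohortsNeedingReview(
  scoresByCohort: Map<string, number[]>,
  threshold = 5 // points on the 0..100 match-score scale
): string[] {
  const allScores = [...scoresByCohort.values()].flat();
  const overall = mean(allScores);
  const flagged: string[] = [];
  for (const [cohort, scores] of scoresByCohort) {
    if (Math.abs(mean(scores) - overall) > threshold) {
      flagged.push(cohort);
    }
  }
  return flagged;
}
```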

Compliance references

EU AI Act Reg. 2024/1689, Annex III §4
GDPR Art. 22
EU AI Act Art. 86 (right to explanation)

Placement Prediction

v0.9.0 (preview)
Hybrid scoring
Last audit: 2026-02-20

Purpose

Estimate a student's probability of securing a job offer within 6 months post-graduation.

Audience: Student and their university career service only. Never shown to recruiters.

Inputs used

  • Verified project count — from Project
  • Stage completions — from StageExperience
  • Supervisor would-hire signal — from StageExperience
  • GPA (only if the student has opted in) — from Student profile
    Opt-in only
  • Skill graph depth — from SkillDelta

Never used

  • Gender
  • Nationality
  • Ethnicity
  • Religion
  • Family background
  • Socio-economic data

Scoring weights

  • Stage outcomes — max 35 pts
  • Verified projects — max 25 pts
  • Skill graph breadth — max 20 pts
  • Academic record (opt-in) — max 20 pts

Maximum total: 100 pts.

Human oversight

Predictions are advisory, never determinative. Career services can contextualize or suppress any prediction the student finds unhelpful.

Your rights as a subject

  • Right to view your own prediction
  • Right to request suppression from your dashboard
  • Right to have the prediction excluded from shared profiles

Bias testing

Counterfactual tests run on each model revision: does the prediction change when protected attributes are swapped? Target drift is below 2%.
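A counterfactual swap test along these lines might look like the sketch below; the `Model` and `Profile` shapes are assumptions, not the production interfaces. Since protected attributes are never model inputs, such a test mainly guards against proxy leakage through correlated features:

```typescript
// Illustrative counterfactual test: swap one protected attribute,
// re-run the model, and report the largest change in the predicted
// probability. A revision passes if the drift stays below 0.02
// (2 percentage points). All names here are hypothetical.

type Profile = Record<string, number | string>;
type Model = (p: Profile) => number; // returns a probability in 0..1

function counterfactualDrift(
  model: Model,
  profile: Profile,
  attribute: string,
  alternatives: string[]
): number {
  const baseline = model(profile);
  let maxDrift = 0;
  for (const value of alternatives) {
    const swapped = { ...profile, [attribute]: value };
    maxDrift = Math.max(maxDrift, Math.abs(model(swapped) - baseline));
  }
  return maxDrift;
}
```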

Compliance references

EU AI Act Reg. 2024/1689, Annex III §4
GDPR Art. 22

Questions or concerns?

Write to . We respond within 14 days to any explanation request, human-review request, or rights exercise.

Data Protection Officer contact available on request. DPIA documentation shared with regulators on request.

    InTransparency — Verified Student Profiles | University-to-Work Platform