AI Job Risk Index

Methodology

Last updated: March 31, 2026

The AI Job Risk Index score is not a literal forecast that a profession will disappear. It is a comparative editorial signal from 0 to 100 that helps readers compare how strongly occupations may be exposed to AI-driven change.

This page explains what inputs we look at, what dimensions we evaluate, how AI and human responsibility are separated in the workflow, and where the limits of the methodology remain.

  • The score is comparative, not a prediction of layoffs or unemployment.
  • Long-form occupation guides are managed as fixed editorial assets rather than disposable generated copy.
  • AI can assist parts of the workflow, but publication standards and quality responsibility remain human-owned.

1. What the score means

The score compares how exposed an occupation may be to AI-driven change based on task structure, repeatability, information-handling intensity, and recent weekly signals.

A higher score usually means a larger share of routine, standardized, or information-processing-heavy work. It does not mean the occupation itself will disappear overnight.

2. Inputs and update scope

The weekly process ingests signals tied to AI capability changes, product releases, business adoption, regulation, and implementation patterns in different industries.

Those signals are interpreted against the job catalog, score history, and comparison layers used on job, industry, country, and report pages.

  • Occupation catalog: fixed job records and slugs
  • Weekly signals: AI products, enterprise adoption, regulation, and industry movement
  • Historical data: weekly scores, changes, and ranking movement
  • Aggregation layers: industry pages, country pages, and reports
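The four input layers above can be sketched as simple data records. This is an illustrative model only; the field and class names are hypothetical and do not reflect the site's actual schema.

```python
from dataclasses import dataclass

# Hypothetical data model; names are illustrative, not the site's real schema.

@dataclass
class Occupation:
    slug: str                 # fixed job record identifier (stable across weeks)
    name: str

@dataclass
class WeeklySignal:
    category: str             # e.g. "product", "adoption", "regulation", "industry"
    summary: str              # short description of the observed change

@dataclass
class ScoreHistory:
    slug: str
    weekly_scores: list       # list of (week_label, score) tuples
```

A signal on its own changes nothing; it is interpreted against the occupation catalog and score history before any score moves.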

3. Main evaluation dimensions

We do not score occupations on a single trait. We compare routine structure, information-processing intensity, the kind of creativity involved, the weight of human interaction, and physical or site-specific constraints.

That is why two jobs can both look knowledge-based while carrying different levels of AI exposure once accountability, ambiguity, and execution context are considered.

  • Routine structure: how repeatable and rule-based the work is
  • Information processing: how much of the role depends on drafting, summarizing, comparing, or organizing information
  • Type of creativity: whether high-volume generation is enough or contextual judgment still drives quality
  • Human responsibility: how much the role depends on explanation, trust, negotiation, or emotional reading
  • Physical and site constraints: how much work depends on place, equipment, safety, and live conditions
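To make the multi-dimensional comparison concrete, here is a minimal sketch of how dimension ratings could combine into a 0–100 signal. The weights and the split between exposure-increasing and exposure-reducing dimensions are assumptions for illustration; the actual weighting is editorial and not published.

```python
# Hypothetical weights; the real editorial weighting is not published.
# Each dimension is rated 0.0-1.0 by a reviewer.
WEIGHTS = {
    "routine_structure": 0.30,       # more routine -> more exposed
    "information_processing": 0.25,  # more text-handling -> more exposed
    "creativity_type": 0.15,         # volume-style creativity -> more exposed
    "human_responsibility": 0.15,    # more trust/negotiation -> less exposed
    "physical_constraints": 0.15,    # more site/equipment dependence -> less exposed
}

def exposure_score(ratings: dict) -> int:
    """Combine dimension ratings into a comparative 0-100 signal (sketch)."""
    exposed = (ratings["routine_structure"] * WEIGHTS["routine_structure"]
               + ratings["information_processing"] * WEIGHTS["information_processing"]
               + ratings["creativity_type"] * WEIGHTS["creativity_type"])
    protected = (ratings["human_responsibility"] * WEIGHTS["human_responsibility"]
                 + ratings["physical_constraints"] * WEIGHTS["physical_constraints"])
    # Protective dimensions subtract from their maximum possible contribution.
    raw = exposed + (WEIGHTS["human_responsibility"]
                     + WEIGHTS["physical_constraints"]) - protected
    return round(max(0.0, min(1.0, raw)) * 100)
```

Under this sketch, two knowledge-based jobs with identical routine ratings can still diverge once the protective dimensions (responsibility and physical constraints) are subtracted, which mirrors the point above.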

4. Update workflow

The weekly batch covers signal collection, score evaluation, historical persistence, ranking refresh, and page updates.

Stable long-form guides are reviewed under a separate quality layer so fixed pages do not drift into weak or broken states.
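The five weekly stages can be sketched as a fixed pipeline order. The function names here are placeholders, not the site's actual implementation.

```python
# Sketch of the weekly batch order described above; all names are placeholders.
def run_weekly_batch(collect, evaluate, persist, rerank, publish):
    signals = collect()            # 1. signal collection
    scores = evaluate(signals)     # 2. score evaluation against the catalog
    persist(scores)                # 3. historical persistence (weekly score rows)
    rankings = rerank(scores)      # 4. ranking refresh
    publish(scores, rankings)      # 5. job / industry / country page updates
    return rankings
```

Keeping the stages in a strict order matters: rankings are derived from the persisted scores, and pages are rebuilt only after both are final.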

5. Where AI is used and where humans remain responsible

AI can assist with working drafts, signal organization, and translation support. It is useful where large amounts of text need to be turned into comparable working material quickly.

Humans remain responsible for the scoring framework, publication rules, fixed source guides, quality checks, and final correction decisions.

6. Translation and localization

Job guides are first finalized in Japanese and English, then rebuilt for additional locales under the same structural rules.

After translation, we check for missing fields, broken structure, wrong job slugs, and leftover source-language text before a locale is treated as publishable.
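Those publishability checks can be sketched as a single validation pass. The required-field list and the source-language pattern are assumptions for illustration; the real checklist is internal.

```python
import re

# Hypothetical required fields; the site's actual field set is not published.
REQUIRED_FIELDS = ("title", "summary", "body")

def validate_translation(record: dict, valid_slugs: set, source_lang_pattern: str) -> list:
    """Return a list of problems; an empty list means the locale record is publishable."""
    problems = []
    # 1. Missing fields
    for name in REQUIRED_FIELDS:
        if not record.get(name):
            problems.append(f"missing field: {name}")
    # 2. Wrong job slug (must match a fixed catalog record)
    if record.get("slug") not in valid_slugs:
        problems.append(f"wrong job slug: {record.get('slug')}")
    # 3. Leftover source-language text, e.g. Japanese characters in an English locale
    for name in REQUIRED_FIELDS:
        if re.search(source_lang_pattern, record.get(name, "")):
            problems.append(f"leftover source-language text in: {name}")
    return problems
```

For an English locale translated from Japanese, the pattern could be a character-range check such as `[\u3040-\u30ff\u4e00-\u9fff]`; structural checks (broken markup, missing sections) would sit alongside these field-level ones.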

7. Limits and caution

This site does not predict the future with certainty. AI impact changes with regulation, adoption speed, economic conditions, company design, and local labor structure.

Readers should therefore use the score and editorial guides as comparative research material, not as substitutes for role-specific judgment.