Upcoming changes in Adobe Learning Manager

Overview of the release

The April 2026 release of Adobe Learning Manager introduces a wide set of improvements that make learning smoother for learners, easier to manage for admins, and more flexible for instructors. The Fluidic Player gains clearer navigation with a “Next Module” label and a dedicated Exit button; the Zoom Connector supports multiple concurrent sessions so teams can run parallel virtual classes without manual setup; and shared courses gain better visibility by showing the real author instead of “External Author” in peer accounts. The update also exposes learning‑object expiry dates in Learner APIs to help LXPs highlight time‑sensitive trainings, adds multi‑language support for Job Aids so one Job Aid can carry all language versions, and lets admins restrict when modules can be started by defining start/end windows, which is useful for cohorts or timed programs.

External systems using LTI can now set the player language automatically, giving learners a consistent language experience across platforms. Several checklist upgrades also arrive, including weighted scoring, multilingual question text, and optional reviewer comments for richer feedback. ALM now captures multiple quiz attempts controlled inside SCORM content and reports each attempt cleanly in L2 reporting. Instructors can also generate QR codes themselves for instant enrollment and attendance tracking during in‑person sessions, and Captivate content plays more cleanly with a unified TOC, slide‑level completion ticks, and reliable notes exports. Overall, the release focuses on clarity, consistency, multilingual readiness, admin efficiency, and more flexible training delivery.

Fluidic Player navigation - show the name of the next module

Overview

This enhancement was already included in the November 2025 release of Adobe Learning Manager.

The “Next” action in the player indicates what will happen when clicked by displaying the name of the next module or course and by explicitly signaling when the learner is about to exit the player.

What’s new

“Next Module: {ModuleName}” label in the player

The Next icon in the Fluidic Player now shows the name of the next module in the course. For example, Next Module: Lesson 2 - Getting started.

This applies wherever the learner is moving from one module to the next within the same course.

Clear exit action on the last module

When the learner is on the last module in a course, a new Exit action button appears, indicating that clicking it will close the player and return them to the course context.

Responsive behavior for mobile and PDF content

On smaller viewports (for example, ~320 px width), the Next label may be shortened or hidden, showing only the icon, to avoid overlapping with PDF controls.

For PDF modules, the player adjusts controls to a separate line, so navigation labels and PDF controls don’t interfere with each other.

Updated Admin > Branding > Player preview

The player preview in Admin > Branding now reflects the new label, for example, Next Module: Lesson 2. This allows administrators to see the updated navigation behavior.

Key benefits

Clearer navigation for learners

Learners no longer have to guess what will happen when they select “Next.” The label clearly specifies what comes next, whether it’s a module or a course. This reduction in ambiguity helps alleviate hesitation and confusion, particularly in large customer education audiences where many learners may not be familiar with LMS interfaces.

Higher course‑completion rates

Clearly stating the next step (Next Module: {ModuleName}) and adding a distinct Exit action for the final module reduces the likelihood of learners abandoning the course or overlooking the last completion step.

More predictable user experience across devices

The updated labels align with the Next or Previous behavior and icons across desktop, tablet, and mobile. Layout constraints are respected across devices and PDF flows so that controls remain usable and accessible.

This is particularly important for headless implementations where the Fluidic Player is embedded inside a custom learning experience.

Use cases

Customer and partner education portals (headless or AEM‑integrated)

Accounts that use Adobe Learning Manager in a fully headless setup often direct learners in from external marketing channels. These learners:

  • Often consume video content in long sequences.

  • Expect a curriculum‑style experience where the system clearly indicates the next episode/module.

In these environments, the Next Module: {ModuleName} label:

  • Reinforces the guided nature of the journey.

  • Minimizes drop‑off between modules.

Compliance and certification courses with ordered modules

In regulated or compliance‑heavy scenarios:

  • Learners must complete a strict sequence of modules.

  • Authors often disable TOC to avoid skipping.

Here, showing Next Module: {ModuleName}:

  • Confirms to learners that they are following the correct sequence.

  • Makes it less likely that they misinterpret the Next action and exit early.

Learning Paths where courses follow each other

When Learning Paths or equivalent structures chain multiple courses together, the label makes the transition from one course to the next explicit. This is useful when building curriculum‑style sequences for large audiences.

Mobile‑first consumption

For learners primarily using phones or tablets:

  • Updated labels and responsive behavior ensure navigation remains understandable without relying on tiny close icons or hidden controls.

  • This is important for customer education, gig workers, or frontline learners who may access content in short sessions on mobile devices.

Zoom Connector - create multiple concurrent Zoom sessions

Overview

The upcoming upgrade to the Zoom Connector significantly enhances how Adobe Learning Manager manages Virtual Instructor-Led Training (VILT). Previously, only one Zoom session could be created at a time through the connector. With this update, administrators and authors can schedule multiple Zoom sessions at the same time using the standard integration.

What’s new

Support for multiple concurrent Zoom sessions via the connector

  • The Zoom Connector now allows more than one VILT session at the same date/time to be created from ALM.

  • The scheduling logic no longer enforces a “one Zoom meeting at a time” constraint at the account/connector level.

  • Administrators and authors can configure overlapping VILT sessions (for example, regional classrooms, parallel tracks, or repeated sessions for different partner groups) without workarounds.

Meetings are created using the instructor’s Zoom identity (not the Zoom super admin)

To safely support concurrent meetings, the connector has been updated so that:

  • Zoom meetings are now created using the instructor’s email address, instead of the Zoom super admin email (a sketch of this pattern follows the note below).

  • Each instructor’s Zoom account can host its own meetings in parallel with other instructors, subject to the limits of the existing Zoom plan.

Note:

  • Only one instructor per meeting is still supported.

  • If an instructor’s email is later updated in Adobe Learning Manager, existing meetings remain associated with the original email used at creation.
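
For context on what “created using the instructor’s identity” means at the API level, here is a hedged sketch using Zoom’s public create‑meeting endpoint. This illustrates the pattern the connector aligns with; it is not ALM’s internal implementation, and the token handling is simplified.

```python
# Hedged sketch: create a Zoom meeting under a specific instructor's account
# via Zoom's public API. Context for the connector's behavior only.
import requests

def create_meeting(instructor_email: str, topic: str, start_iso: str, token: str) -> str:
    """Schedule a meeting owned by the instructor and return its join URL."""
    resp = requests.post(
        f"https://api.zoom.us/v2/users/{instructor_email}/meetings",
        headers={"Authorization": f"Bearer {token}"},
        json={"topic": topic, "type": 2, "start_time": start_iso},  # type 2 = scheduled
    )
    resp.raise_for_status()
    return resp.json()["join_url"]
```

Because each meeting is owned by its instructor’s account, two instructors can host overlapping sessions without contending for a single shared super‑admin user.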

No more manual Zoom URL pasting for concurrent sessions

Previously, when a second or third Zoom session had to run at the same time:

  • Authors had to manually create Zoom meetings outside ALM and then paste the Zoom join URL into the course instance configuration.

  • This was error‑prone and did not benefit from connector features like attendance tracking.

With the updated connector:

  • All sessions can be created directly from the ALM UI using the Zoom Connector, even if they overlap in time.

  • Session lifecycle (creation/cancellation) continues to be managed centrally via integration.

Key benefits

Better VILT scheduling at scale

Organizations can now:

  • Run multiple Zoom‑based virtual classrooms at the same time (for example, parallel tracks at a virtual summit, regional cohorts, or separate partner training sessions).

  • Avoid bottlenecks that previously forced admins to serialize sessions or rely on manual Zoom management.

Reduced administrator and author overhead

The enhancement eliminates:

  • Manual creation of Zoom meetings outside of Adobe Learning Manager.

  • Copy‑paste of Zoom URLs into each course instance for overlapping sessions.

  • Risk of mis‑configured links, wrong meetings being attached, or missed attendance tracking.

Administrators and authors can manage all Zoom sessions from Adobe Learning Manager, using familiar workflows.

Better alignment with Zoom provisioning and instructor roles

By tying meetings to individual instructor Zoom accounts:

  • Each instructor can operate within their own Zoom license limits.

  • Organizations can use their existing Zoom provisioning model (one account per trainer, per BU, etc.) while still integrating fully with Adobe Learning Manager.

  • It avoids the single‑point bottleneck of using a shared super‑admin Zoom user for all sessions.

Use cases

Multi‑track virtual events and summits

Customer education teams running large events (for example, product bootcamps, partner summits, or certification weeks) can:

  • Configure multiple Zoom‑based sessions in the same time slot (for different tracks or topics).

  • Manage all of them as VILT modules under Adobe Learning Manager’s courses and Learning Paths.

  • Provide learners a unified experience while the connector handles all underlying Zoom meeting creation.

Global partner and customer training

Organizations that train customers and partners across regions can:

  • Run separate Zoom sessions for EMEA, APAC, and Americas at overlapping times to match local working hours.

  • Avoid forcing a single global time slot or manual Zoom setup for additional cohorts.

Internal enablement

Internal enablement teams (sales, support, and so on) can:

  • Schedule parallel onboarding sessions or role‑based breakouts (for example, separate Zoom rooms for developers, admins, and business stakeholders) in ALM.

  • Keep all sessions within ALM’s VILT model for reporting and compliance purposes, rather than partially transitioning to unmanaged Zoom meetings.

Show original author for shared courses in peer accounts

Overview

When a course is shared through the catalog to a peer account, Adobe Learning Manager currently labels the author as “External Author” in the Learner, Administrator, and Author views of the receiving account. This can create challenges for learners and administrators, particularly in large enterprises, as it becomes difficult to identify and contact the appropriate content owner when issues or questions arise.

The enhancement ensures that author information is preserved and surfaced for shared courses in peer accounts, rather than being replaced by a generic placeholder.

What’s new

Show actual author name for shared courses in peer accounts

For courses shared via external or peer catalogs, the original author name from the source account is now displayed in the receiving account instead of “External Author”.

This applies to:

  • Learner app (course card or course details).

  • Administrator and author views when previewing as a learner.

Key benefits

Direct owner visibility for shared content

Learners and administrators in peer accounts can now:

  • See who authored the course, even when it is acquired via a shared catalog.

  • Avoid the generic and unhelpful “External Author” label.

More consistent multi‑tenant and peer‑account experience

For customers running multi‑tenant or extended‑enterprise scenarios:

  • The same course appears with consistent author branding across accounts.

  • The learner experience is aligned with expectations from the primary account (for example, seeing “Cloud Academy Team” instead of “External Author”).

Use cases

Large enterprise with peer accounts

The enterprise uses ALM with:

  • A main account that owns the canonical courses, and

  • Peer accounts that acquire content via shared catalogs.

Learners in peer accounts need to know which enterprise team authored a course to route questions or improvement suggestions correctly.

With this enhancement:

  • Shared courses now display the correct enterprise author’s name in peer accounts.

  • The enterprise’s internal support load is reduced because learners and local admins know who to contact.

Internal multi‑BU sharing

Where one business unit curates learning for others:

  • The owning BU can be identified in the author field across all consuming accounts.

  • Local L&D admins can quickly see whether a course is maintained locally or by another BU, and collaborate accordingly.

Expose Learning Object expiry (auto‑retire) date in Learner APIs

Overview

This enhancement makes the auto‑retire date of a Learning Object (LO) available directly through Adobe Learning Manager’s Learner‑facing APIs. When a course, learning path, or certification is configured with an expiry or auto‑retire date, that information is now part of the LO data returned by key Learner endpoints.

What’s new

New expiry/auto‑retire field in Learner LO APIs

  • The Learner LO APIs (for example, the endpoints that return learning objects to the learner experience and to external platforms) now include the LO expiry date (the auto‑retire date configured for that learning object).

  • This field is returned as part of the LO entity in responses such as:

    • Get Learning Object (LO details).

    • LO data used to populate learner home, catalog, and search results.

  • The field complements the existing instance‑level completionDeadline; the new field is specifically the LO‑level auto‑retire date (see the sketch after this list).
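
For LXP builders, here is a minimal sketch of reading the new field through the Learner API. The endpoint shape follows ALM’s existing Learner API conventions, but the attribute name expiryDate is an assumption for illustration; confirm the actual field name in the release’s API reference.

```python
# Hedged sketch: fetch a learning object and read its LO-level expiry
# (auto-retire) date. "expiryDate" is an assumed attribute name.
import requests

BASE = "https://learningmanager.adobe.com/primeapi/v2"

def get_lo_expiry(lo_id: str, access_token: str):
    """Return the LO-level auto-retire date for a learning object, if set."""
    resp = requests.get(
        f"{BASE}/learningObjects/{lo_id}",
        headers={"Authorization": f"oauth {access_token}"},
    )
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    # Instance-level completionDeadline continues to live on LO instances;
    # the new LO-level field (assumed name below) is what this release adds.
    return attrs.get("expiryDate")
```

An LXP could compare this date to the current time to render “Expiring on {date}” badges on course cards.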

Availability in search‑backed learner experiences

Because the expiry date is exposed as part of the search‑backed LO representation, it is now available anywhere ALM or an external platform uses:

  • search APIs or

  • search‑driven catalogs and suggestions to construct learner views.

Scope and exclusions

The enhancement applies to Learner APIs only.

Key benefits

Expiry‑aware learner experience in custom LXPs

Large and medium enterprises with custom LXPs can now obtain LO expiry information directly from ALM, allowing them to:

  • Show “Expiring on {date}” or “Expiring soon” labels on course cards and detail pages.

  • Communicate urgency more clearly, so learners prioritize training that is about to retire.

This is particularly important for compliance or time‑bound product training, where learning objects are regularly refreshed, and older versions are retired.

Better guidance for learners on which trainings to take now

By exposing LO expiry, the learner experience can:

  • Highlight courses that are still valid vs. ones about to be retired.

  • Help learners avoid enrolling in trainings that will no longer be available or valid in the near future.

Consistency with existing completion deadline data

Previously, Learner APIs exposed the instance‑level completionDeadline but not the LO‑level auto‑retire date. With this change, both aspects of a training’s timeline are available:

  • “By when must I finish this instance?” (completion deadline).

  • “Until when is this training offered?” (auto‑retire/expiry date).

Use cases

A global enterprise with strict course lifecycle management

Enterprises that regularly retire and replace courses (for example, regulatory, product, or methodology updates) can:

  • Avoid learner confusion about whether a training is being phased out.

  • Drive learners toward the most current, long‑lived offerings.

Their custom portals and internal tools can now read the expiry date directly from ALM via the Learner APIs.

External customer or partner academies

For customer and partner education, marketing pages and portals often emphasize up‑to‑date training.

Having expiry dates in the LO API lets experience builders:

  • Hide or de‑emphasize content that is close to retirement.

  • Build “Last chance to complete” campaigns.

Multi‑language support for Job Aids

Overview

The enhancement extends Adobe Learning Manager’s localization model to Job Aids, allowing authors to attach different content files per language to a single Job Aid. Instead of creating separate Job Aids for each language, authors can now manage all localized versions as one logical Job Aid.

What’s new

Language-specific content upload for Job Aids

Authors can attach different files per supported language to a single Job Aid, just as they can for courses and other learning objects.

The Job Aid creation/edit experience now supports:

  • Selecting a language.

  • Uploading the corresponding content file for that language within the same Job Aid entity.

Consistent language handling in the player and learner UI

The Fluidic Player has been updated so that when a learner opens a Job Aid, the content variant corresponding to the learner’s language is displayed (where available).

Administrators and authors can view Job Aids as single objects with language variants, rather than separate items per language.

Key benefits

Single Job Aid for all languages

Authors can avoid creating separate Job Aids per language.

All language variants of the same Job Aid (for example, a procedure, SOP, checklist PDF or reference guide) can be managed in one place.

Better experience for global learners

Learners automatically see the Job Aid in their preferred language, which means there is:

  • Less confusion about which version to open.

  • Less risk of accessing out‑of‑locale or outdated copies.

This is particularly useful in multilingual organizations where the same process or product documentation must be available in multiple languages.

Use cases

Global roll‑out of reference content

An enterprise needs to provide Job Aids in several languages to learners worldwide, such as:

  • Product reference sheets.

  • Process checklists.

  • Support playbooks.

Instead of creating separate Job Aids like “Product Quick Start – EN”, “Product Quick Start – DE”, “Product Quick Start – JP”, etc., they can create one Job Aid, attach localized files for each language, and let ALM serve the correct version to each learner based on language settings.

Customer or partner‑facing documentation for multiple markets

For customer and partner academies, Job Aids might include:

  • Product cheat sheets

  • Integration guides

  • Support workflows

With multi‑language Job Aids:

  • Each partner sees the localized version without being forced to choose between language‑specific entries.

  • Marketing and enablement teams can manage one Job Aid per topic across all locales.

Restrict when modules can be started

Overview

The enhancement lets authors and administrators in Adobe Learning Manager define a time window during which learners are allowed to start a module. Outside the configured start/end window, the module remains visible in the course structure, but learners cannot initiate it.

This capability is critical for users who need tighter control over when certain content becomes available or should stop being initiated, for example, in timed programs, cohort‑based training, or time‑sensitive exercises.

What’s new

Authors can now configure, at the module level within a course, a start date/time and end date/time that govern when learners are allowed to launch that module. Within this window, the module behaves as usual; before the start time or after the end time, the learner sees the module in the course outline but cannot start it.

The configuration appears in the course authoring user interface as additional scheduling controls for specific module types, such as self-paced content, quizzes, or activities. Administrators can use these controls to create modules that open in phases or to prevent late starts in programs where content must be consumed within a defined timeframe.
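
As an illustration of the gating rule (not ALM’s actual implementation), the launch check reduces to a simple window comparison:

```python
# Illustrative sketch of the start/end window gating described above.
# Field names and the UTC assumption are hypothetical; ALM enforces this
# as part of module launch.
from datetime import datetime, timezone
from typing import Optional

def can_start_module(start_at: Optional[datetime], end_at: Optional[datetime]) -> bool:
    """Return True if a learner may launch the module right now."""
    now = datetime.now(timezone.utc)
    if start_at and now < start_at:
        return False  # window not open yet: module visible but not launchable
    if end_at and now > end_at:
        return False  # window closed: late starts are blocked
    return True
```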

Key benefits

The main advantage is precise control over when modules are accessible. Training teams can synchronize module availability with real-world events, such as new product launches, regulatory deadlines, and internal programs, and can ensure that learners complete prerequisite content before later modules open.

For instance, cohort 1 can access module 2 only in week 2, while module 3 will remain locked until week 3, eliminating the need to manually hide and unhide content or create separate course versions.

This enhances the learner experience: instead of facing modules that can technically be accessed but shouldn’t be at that time (or should already be completed), learners see a course structure where the modules they are permitted to start are clearly aligned with the intended schedule.

Use cases

  • Cohort-based enablement program: In this program, each week unlocks a new module. The content for Week 1 is available immediately, while Week 2 is visible but cannot be started until a specified date. Week 3 follows the same gating process. Learners can see the entire learning path, but the system controls when they can actually begin each step.

  • Time‑bound product or campaign training: Marketing or product teams may create a training module that should only be accessed while a campaign is active or while a specific version of a product is still available. The configured availability window ensures that learners don’t begin a module about a discontinued product version after the window closes.

  • Assessment or exam environments: Organizations can open a module (such as a test) for a short, well‑defined window (for example, “you may start the exam anytime between 9:00 and 12:00 on a given date”). Learners cannot begin the exam outside that window, which supports fair scheduling across time zones and cohorts.

Control player language via custom LTI parameter

Overview

The enhancement allows external platforms using LTI (Learning Tools Interoperability) to specify the language for Adobe Learning Manager content at the time of launch. Instead of depending on the learner to change the language within the Fluidic Player, the LTI consumer can send a language code through a custom LTI parameter. Adobe Learning Manager will then use this code to select the appropriate language variant.

What’s new

External platforms that act as LTI consumers can now pass a custom language parameter (and related player settings) when launching ALM content. ALM reads this parameter and:

  • Sets the player language accordingly.

  • Launches the corresponding language variant of the module, when multi‑language content is configured.

This means a first‑time learner who selects French on the external platform will see the ALM player and module launch directly in French, without having to adjust anything inside ALM.

The enhancement also accommodates scenarios in which the external platform treats ALM as a headless content player. For example, it allows the hiding of navigation elements and the table of contents (TOC) by sending additional custom parameters to adjust certain user interface settings. These settings work in conjunction with the language parameter, enabling the external platform to provide a smooth, branded experience while still utilizing ALM for playback and tracking.
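
As a hedged illustration, an LTI 1.1 launch might carry the language and player settings as custom parameters. The parameter names below (custom_player_locale, custom_hide_toc, custom_hide_nav) are placeholders, not confirmed keys; consult ALM’s LTI documentation for the actual names.

```python
# Hypothetical LTI launch parameters. Only the custom_* keys are new here,
# and their names are assumptions for illustration.
launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-1234",
    "user_id": "learner-42",
    # Assumed custom parameters controlling the Fluidic Player:
    "custom_player_locale": "fr-FR",  # launch the French content variant
    "custom_hide_toc": "true",        # headless embed: suppress ALM's TOC
    "custom_hide_nav": "true",        # suppress player navigation chrome
}
```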

Key benefits

  • Consistent language experience across systems: When a learner selects a language in the external portal, that choice is immediately reflected in ALM. This ensures that learners don’t face any mismatch between the language of the portal and the course. As a result, they won’t have to search for a language switch within the player.

  • Language‑specific reporting: Because the language selected in the external platform matches what ALM records, analytics and learner tracking become more accurate. This alignment also supports configurations where ALM’s own language controls are intentionally disabled or hidden in the Fluidic Player for specific courses; in these cases, the external platform serves as the single source of truth for language.

Use cases

  • A significant use case involves large enterprises utilizing LTI-based integrations. Learners first enroll and select a language on the platform. They then launch ALM training sessions through LTI. With this enhancement, when a learner selects Spanish, the ALM module automatically opens in Spanish. This means that learners don’t need to adjust the language settings in ALM. Furthermore, language-based reporting remains consistent with what learners see and experience in ALM.

  • Another application is the delivery of headless course experiences within a customer or partner portal. In this setup, the portal may embed ALM content using an iframe, while all navigation and language user experience (UX) are managed outside of ALM. By utilizing custom LTI parameters, the portal can ensure the ALM player is displayed in the correct language and that any unnecessary user interface elements (such as the table of contents and navigation buttons) are hidden. This allows learners to perceive a single, cohesive application rather than a disjointed collection of tools.

  • This is beneficial for organizations that conduct large-scale training in multiple languages using another LMS or learning platform. They can standardize their use of that platform for managing learner profiles, selecting locales, and presenting catalogs. Meanwhile, ALM serves as a reliable content and tracking engine, respecting the language preferences and user interactions specified by the external system during each LTI launch.

Checklist question weightage for instructor evaluations

Overview

The enhancement introduces weighted checklists, allowing instructors and managers to evaluate learners using graded scales and total scores, rather than treating each checklist question as equal. The goal is to let a single checklist reflect the relative importance of different actions or skills by assigning each question a weight.

What’s new

Checklists will support the following types:

  1. Yes/No
    Behavior remains the same as today: each question is Yes/No and pass criteria are based on the number of “Yes” responses.

  2. Same‑weight questions

    • Questions are scored on a numeric scale (0–10 by default), where:

      • The max/min values on the scale are customizable at the checklist level.

      • The scale can now start at 0 (the previous minimum score was 1).

    • All questions share the same maximum score, so the checklist behaves as a uniform graded scale for each question.

  3. Different‑weight questions

    • Each question has its own maximum score (weight).

    • The passing criteria depend on the percentage of the total possible score that the learner achieves across the checklist (for example, “pass if the learner achieves ≥ 70% of the total available score”).

For all checklist types:

  • The Reviewer (instructor or manager) evaluates the learner according to the configured checklist type:

    • Selecting Yes/No.

    • Choosing scores on the defined scale.

  • The Checklist report is updated to include, for questions with different weightage:

    • The maximum score for each question.

    • The score achieved by each learner for that question.

This allows analysis of overall performance and question-specific performance based on the intended weights.
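
To make the pass criterion concrete, here is a small worked example (values are illustrative) of how a different‑weight checklist resolves to pass/fail:

```python
# Worked example: "different-weight questions" with a 70% total-score
# pass threshold. Question names, weights, and scores are illustrative.
questions = [
    {"name": "Safety check",        "max_score": 10, "achieved": 9},
    {"name": "Diagnostic sequence", "max_score": 8,  "achieved": 5},
    {"name": "Documentation",       "max_score": 2,  "achieved": 2},
]

total_max = sum(q["max_score"] for q in questions)      # 20
total_achieved = sum(q["achieved"] for q in questions)  # 16
percentage = 100 * total_achieved / total_max           # 80.0
passed = percentage >= 70
print(f"{percentage:.0f}% -> {'PASS' if passed else 'FAIL'}")  # 80% -> PASS
```

Because the safety question carries five times the weight of the documentation question, missing it moves the result far more than missing a minor step.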

Key benefits

  • Richer, more realistic assessments: Instructors can reflect real‑world priorities by giving more points to critical behaviors and fewer to minor ones, while still using a checklist workflow suited to observed or practical tasks.

  • Total‑score‑based pass/fail: Evaluations can be based on the overall percentage score, not just how many questions pass a threshold, aligning more closely with typical competency or grading schemes.

  • Better reporting: Updated checklist reports expose max score and achieved score per question, allowing program owners and quality teams to identify specific weak spots and refine training or evaluation guidance.

Use cases

  • Enterprise skill assessments: Engineers are assessed via practical, scenario‑based checklists where certain diagnostic or communication steps must carry more weight than cosmetic or low‑risk steps. Weighted questions and total‑score pass criteria make these assessments more credible and predictive of real‑world performance.

  • Safety and compliance observations: In healthcare, manufacturing, or field service, critical safety steps can be given higher max scores, ensuring that missing a safety‑critical action has a larger impact on the total score than missing a minor procedural step.

  • Coaching and calibration: With max and achieved scores per question in the report, managers can see exactly where learners underperform and calibrate instructors on how to score consistently.

Multi‑language support for checklist questions

Overview

The enhancement introduces multi-language support for checklist questions, allowing reviewers to evaluate and score checklists in their preferred language. This feature is particularly useful in multilingual regions and global deployments, as it allows authors to create localized checklist questions for each supported content language while maintaining a single checklist module and a consistent evaluation process.

In Adobe Learning Manager today:

  • All learner‑facing modules (SCORM, PDFs, HTML, etc.) can be provided in multiple content languages, allowing learners to choose the language they prefer.

  • In a checklist module, reviewers (instructors/managers) evaluate learners based on the questions defined within that checklist.

What’s new

Authoring

  • Authors can now add checklist questions in all languages selected at the course level.

  • For each checklist:

    • The author is expected to provide equivalent question text in every content language in which the course exists.

    • Authors are responsible for ensuring that the meaning of each question is consistent across languages.

Review experience

  • Reviewers will see checklist questions and evaluation UI in their selected content language.

  • When a question is evaluated in one language:

    • The evaluation (score, Yes/No, status) is logically the same across all languages. It’s a single checklist with multiple language views, not separate checklists per language.

Reporting

The Checklist report will display question text in the user’s content language:

  • An administrator or reviewer running the report in each language sees the localized question names for that language.

  • The underlying responses and scores remain the same; only question labels are translated.

Key benefits

  • Better reviewer experience: Reviewers can work entirely in their own language, reading questions and recording evaluations without language barriers.

  • Regulatory and policy alignment: In regions with language equality requirements (for example, Dutch/French in Belgium), checklists can now meet the same standards as other learning materials, reducing compliance risk.

  • Consistent evaluation logic: While the text is localized, evaluation and scoring are shared across all languages, ensuring that results are comparable and centrally managed.

Use cases

  • Multi‑country franchises that operate in multiple languages can deploy a single course and checklist while still providing localized reviewer experiences in each territory.

  • Any global enterprise with local instructors (for example, EMEA, LATAM, APAC) can have reviewers work in their local language while sharing the same global checklist design and reporting.

Checklist with commenting capability for reviewer

Overview

The enhancement introduces a commenting feature for checklist evaluations, allowing reviewers, such as instructors and managers, to provide qualitative feedback alongside the numeric scores. This feedback can be made visible to learners when necessary.

The goal is to support checklist-based evaluations where mentor feedback is as crucial as the numeric result. This includes highlighting specific strengths, areas for improvement, or providing context for the given score.

Today, reviewers can:

  • Evaluate a checklist for each learner, question by question.

  • View results and re‑evaluate learners who have failed.

In real-world scenarios, such as aviation, field trainers assess shop-floor agents and airport staff. Similarly, instructors and mentors in small and medium-sized enterprises (SMEs) often use checklists to evaluate job performance. However, these checklists typically do not include a structured section for capturing narrative feedback related to the evaluation.

What’s new

Authoring options

Authors can configure each checklist to:

  • Enable or disable commenting capability for reviewers.

  • Decide whether the reviewer’s name should be shown to learners along with comments.

This allows organizations to tailor comment visibility to their culture and privacy requirements.

Reviewer experience

When commenting is enabled:

  • Reviewers (instructors/managers) can add optional comments while evaluating a checklist.

  • They can choose whether comments are visible to learners, based on the checklist settings.

If they re‑evaluate a learner, they can update or change comments to reflect the latest assessment.

Reporting and notifications

  • The Checklist report gains a new column for reviewer’s remarks, capturing the comment provided during evaluation.

  • Learners receive notifications (in‑platform and email) whenever a checklist evaluation occurs. These notifications include:

    • The comment and

    • The reviewer’s name, if those were configured to be visible.

This ensures feedback is not only stored but actively surfaced to learners.

Key benefits

  • Richer, coach‑like feedback: Numeric scores are supplemented with contextual remarks, making checklists a more effective tool for coaching, not just compliance.

  • Traceability and auditability: Organizations gain a persistent record of who evaluated whom, when, and what they said, which is important in regulated environments and high‑stakes roles.

  • Better learner engagement: Learners receive clear guidance linked to specific evaluations, which improves their understanding of expectations and subsequent steps.

Use cases

  • Organizations with regulated environments can use comments to document clinical judgment or procedural feedback for staff who are being observed in the field.

  • Aviation and ground‑handling organizations can attach detailed notes on operational performance, safety practices, and customer‑facing behavior, turning a checklist into a structured debrief tool.

  • In mentoring and SME evaluation, instructors can capture nuanced observations that wouldn’t fit into a score alone, for example, “handled escalation well but needs to improve time management” or “excellent troubleshooting flow; missed a documentation step.”

Content‑level multiple attempts and quiz reporting

Overview

Presently, ALM supports multiple attempts at the LMS level via the Multiple Quiz Attempt (MQA) feature:

  • Authors can configure attempts at course level (applied to all quiz‑bearing modules in the course) or at module level (per quiz module).

  • Attempts can be:

    • A specific number (for example, 3 attempts), or

    • Infinite attempts, controlled at LMS level.

  • When a learner consumes a module through the Fluidic Player and then closes the player or completes the module, that session is treated as a single LMS attempt.

  • Each LMS attempt is captured in the L2 quiz report as a new row.

However, if the content file itself (for example, an Articulate SCORM quiz) implements its own multiple‑attempt logic, ALM’s L2 quiz report does not currently distinguish or track those internal attempts correctly.

This enhancement introduces content-level multiple attempt tracking for quizzes, allowing Adobe Learning Manager to accurately capture each attempt within the content itself in the L2 quiz report. It’s designed for situations where the content authoring tool (such as Articulate SCORM) manages quiz attempts independently. With this feature, attempts will be correctly reflected in ALM reporting without depending on LMS-level Multiple Quiz Attempt (MQA) settings.

What’s new

Author flag for content‑level attempts

  • When uploading content into the Content Library, authors can now indicate that a specific content file has multiple attempts embedded within it.

  • This is a per‑content setting that tells ALM to treat attempts defined inside the content as the source of truth.

Course/module behavior

When such content is used in a course:

  • The module will derive its attempts from the content, not from LMS MQA.

  • Learners will see one LMS‑level attempt only:

    • The course overview and module view will not expose an LMS “re‑attempt” button for that module.

    • Attempt handling (for example, re‑tries within the quiz) is governed by the content itself.

Reporting

The L2 quiz report will be updated to treat each content‑level attempt as a separate attempt row:

  • Each internal quiz attempt configured in the content appears as its own row in the L2 quiz report, just as LMS‑level attempts are represented today.

  • The format of each row remains the same as existing multi‑attempt rows in L2 reporting (same columns, structure, and semantics).

  • This gives a consistent reporting experience:

    • Whether attempts are controlled by LMS MQA or by the content, the L2 quiz report shows one row per attempt (see the sketch below).
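
Because every row is one attempt regardless of where the attempt logic lives, downstream consumers can process the export uniformly. A minimal sketch, assuming illustrative column headers that you should match to your actual L2 export:

```python
# Hedged sketch: count attempts per learner/module from an exported L2 quiz
# report CSV. "Learner Email", "Module Name", and the file name are assumed.
import csv
from collections import Counter

attempts = Counter()
with open("l2_quiz_report.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # One row == one attempt, whether driven by LMS MQA or by the content.
        attempts[(row["Learner Email"], row["Module Name"])] += 1

for (learner, module), count in attempts.items():
    print(f"{learner}: {count} attempt(s) on {module}")
```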

Key benefits

  • Accurate attempt history for SCORM quizzes where attempts are controlled internally by tools like Articulate, without forcing LMS‑level MQA configuration on top.

  • Cleaner learner experience: for content‑controlled attempts, learners see a single slot at the LMS level and don’t need to interact with LMS re‑attempt controls; all re‑tries are handled within the quiz UI they already know.

  • Flexible architecture: Users can choose whether ALM MQA or content‑level attempts should drive behavior per module, depending on how their content was authored and how they prefer to manage attempts.

  • Consistent reporting model: downstream consumers of the L2 quiz report can treat each row as “one attempt,” regardless of where the attempt logic originates.

Use cases

  • Organizations using Articulate SCORM can keep self-contained quiz logic within the SCORM package while achieving accurate attempt-level reporting in ALM without extra LMS configuration.

  • Organizations that use vendor-supplied SCORM content can avoid the need to modify or implement additional attempt and retry logic with LMS-level MQA.

Instructor QR codes for instance enrollment and session attendance

Overview

This enhancement adds the ability for instructors to generate QR codes themselves for:

  • Course instance enrollment,

  • Session attendance, or

  • Enrollment + attendance together

It’s designed for situations where learners enter a physical or hybrid classroom and need a quick, self‑service way to enroll and record their attendance by scanning a QR code.

What’s new

Instructor‑generated QR codes

  • Instructors will be able to generate QR codes at the session level for:

    • Enroll in instance: Learners scan to enroll into the instance that includes the current session.

    • Mark session attendance: Learners scan during/after the session to record attendance for that specific session.

    • Enroll in instance + mark session attendance: A combined QR code for walk‑ins who are not yet enrolled and need their attendance marked in one step.

  • Instructors can export the QR codes they need based on the scenario (enrollment, attendance, or both).

QR code packaging

The exported QR code PDF will include:

  • Course name

  • Instance name

  • Session name

These make it easy for instructors and coordinators to identify and print the correct QR code for each session.
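
For context only, a QR code of this kind simply encodes a deep link that ALM resolves to the enrollment or attendance action. The sketch below uses the open‑source qrcode Python package and a made‑up URL pattern; ALM exports these codes as ready‑to‑print PDFs from the instructor UI, so no API call is implied.

```python
# Purely illustrative: encode a hypothetical enroll-and-attend deep link
# as a QR image. The host and query parameters are placeholders.
import qrcode  # pip install qrcode[pil]

session_link = (
    "https://learningmanager.example.com/app/learner"
    "?action=enrollAndAttend&loId=course:1234&sessionId=5678"
)
qrcode.make(session_link).save("session_5678_qr.png")
```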

Key benefits

  • Instructor autonomy: Instructors no longer need to wait for admins to create QR codes. They can generate them directly for each session, improving agility and reducing coordination overhead.

  • Better classroom logistics: For walk‑in or on‑site audiences (such as field workers, shop‑floor staff, or external attendees), instructors can manage enrollment and attendance on the spot using QR codes.

  • Reduced admin workload: Admin teams can focus on configuration and governance instead of handling routine QR code generation requests for every session.

Use cases

  • Organizations running large volumes of on‑site sessions (for example, product training for professionals) can empower instructors to print session‑specific QR codes that enroll and mark attendance with one scan.

  • In retail, manufacturing, and healthcare training, where learners often join sessions directly from the floor or without pre-enrollment, an “Enroll + Attendance” QR code can be placed at the door. This allows learners to self-serve their enrollment and attendance via their phones.

  • At partner or customer training events, the on‑site trainer can easily adapt to room changes, additional sessions, or extra attendees without needing to ask an administrator for new QR codes.

Captivate and ALM player improvements

Overview

This enhancement improves the experience of playing Adobe Captivate content within the Adobe Learning Manager (ALM) player, particularly following the recent changes to Captivate’s architecture. The aim is to allow learners to engage with Captivate modules natively in ALM while ensuring that navigation, completion tracking, and note-taking are clear, consistent, and reliable.

What’s new

Unified TOC experience

  • Only the ALM TOC will be displayed on the left side of the player.

  • Captivate’s own TOC will be hidden when the module is played within ALM.

  • This removes duplication, ensures a single source of truth for navigation, and frees up screen real estate.

Visual completion feedback

  • The ALM TOC will show green tick marks (or equivalent visual cues) indicating slide‑level completion.

  • As learners progress through Captivate slides, the ALM TOC reflects which slides have been completed, aligning with learner expectations for modern course players.

Contextual progress controls

  • The player controls will adapt based on slide type:

    • For video slides:

      • Show a time progress bar, reflecting video playback.
  • For non‑video slides:

    • Display slide navigation controls (next/previous slide, etc.) instead of a non‑functional time bar.

      • This avoids showing irrelevant or non‑working controls on certain slide types.

Streamlined navigation

  • The separate module navigation bar (ALM) and course navigation bar will be merged into a single, intuitive bar.

  • This unified navigation:

    • Clearly distinguishes moving through the Captivate module vs. moving back to course/module level.

    • Reduces confusion caused by multiple bars with overlapping purposes.

Reliable notes linking

  • Notes will be linked to slide numbers rather than timestamps.

  • This change:

    • Fixes export failures caused by missing or incorrect timestamps.

    • Ensures notes can be exported consistently as PDFs, with a reliable mapping between notes and the slide context they belong to.

Key benefits

  • Cleaner, single‑player experience: Learners interact with one TOC and one navigation model, reducing confusion and cognitive load.

  • Accurate completion and progress indications: Slide‑level ticks and contextual controls help learners understand where they are and what’s left.

  • More robust note‑taking and exports: By tying notes to slides instead of fragile timestamps, users regain a reliable notes‑to‑PDF workflow, even with slide‑based Captivate content.

  • Preserved author workflow: Authors retain the simplicity of Captivate’s direct publish to ALM, while learners get a modern, integrated playback experience without extra authoring burdens.

Use cases

  • Enablement programs that rely on Captivate for interactive simulations can deploy content into ALM, ensuring that navigation, completion tracking, and notes function consistently for learners.

  • Organizations using Captivate as their main content authoring tool can maintain one-click publishing and avoid confusing double TOCs and non-functional controls for learners.

  • Organizations that rely on notes exported from Captivate content in ALM (for coaching, compliance, or records) can count on the following:

    • Notes link correctly to slides.

    • PDFs are generated as expected.

Changes to Learner Transcripts

Overview

Adobe Learning Manager has revised how it calculates learning time in Learner Transcripts with its April 2026 release. Previously, the reporting logic could produce inflated times when learners left the player open without engaging with the content. The new method tracks active time based on user engagement, specifically whether the tab is in focus and whether there is recent user activity, which results in more accurate data.

This update improves reports and dashboards, helping administrators better ensure compliance and track learner progress. After the release, review your Learner Transcripts to see these enhancements.

The updated calculation method focuses on actual engagement, such as active tab focus and recent user interactions, thereby improving the accuracy of time reporting across the following areas:

  • Learner Transcripts (UI)
  • Admin Dashboard metrics
  • Course Enrollment reports
  • APIs and Connectors

What’s changed

The Learning Time Spent column in Learner Transcripts now uses improved logic to calculate time more accurately. Instead of simply tracking player open/close times, the system now distinguishes between active and idle periods based on user engagement.

  • Active time: Time when the learner is actively engaged (for example, on the correct tab, performing actions like scrolling or watching video).
  • Idle time: Time when the learner is not engaged (for example, tab switched, no activity for 10+ minutes), which is excluded from the total.

This applies to most module types, with two kinds of exceptions: SCORM and Captivate modules retain the original logic, while xAPI modules switch to tab‑based detection (see below).

How it works

The new calculation varies by module type:

  • Video and audio modules: Active when the content is playing, even if the learner switches to another tab. Tab focus is not required for tracking playback time.
  • Static modules (PDF, PPT, Excel, and so on): Active if on the tab and performing activities (mouse movement, scrolling, clicking, keyboard input) within the last 10 minutes. If there is no activity for 10 minutes, it switches to idle.
  • SCORM and Captivate retain the original open/close logic.
  • xAPI now uses tab‑based active time detection, where time is counted only when the tab is active. Note that AICC content is not supported.
  • HTML, LTI, and Other Content: May vary; check Learner Transcripts for accuracy.

Idle time is subtracted, ensuring only true engagement time is reported.
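
Conceptually, the active/idle split for static modules reduces to counting only intervals where the tab is focused and the last interaction was within the 10‑minute idle threshold. A simplified sketch (the event model is illustrative, not ALM’s internal implementation):

```python
# Illustrative sketch of active-time accounting with a 10-minute idle cutoff.
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(minutes=10)

def active_seconds(events, session_end):
    """events: sorted (timestamp, tab_focused) pairs, one per interaction."""
    total = timedelta()
    pairs = zip(events, events[1:] + [(session_end, True)])
    for (ts, focused), (next_ts, _) in pairs:
        if focused:
            # Cap each interval at the threshold: anything beyond 10 minutes
            # without a new interaction is excluded as idle time.
            total += min(next_ts - ts, IDLE_THRESHOLD)
    return total.total_seconds()
```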

Summary table

| Module type | Active time (counted) | Idle time (excluded) |
|---|---|---|
| Video / Audio | Playback time | Not started; ended; paused >10 min |
| Static (PDF/PPT/DOC) | Tab active and activity in last 10 min | No activity >10 min; tab inactive |
| SCORM | Time reported by content runtime | Idle cannot be detected |
| Captivate | Slide‑based timing | Idle cannot be detected |
| xAPI | Tab active | Tab inactive |
| HTML | Player open time with tab active | Tab inactive |
| LTI Producer/Consumer | Applies when LTI content plays within ALM’s player (that is, ALM is consuming LTI content hosted on another LMS acting as the Producer) | Tab inactive |

Note: If the content is played outside the LMS (that is, the content is hosted in ALM, so ALM is the Producer, but playback happens in an external player), this portion of the time‑calculation logic does not apply. LTI Consumer is not supported in Adobe Learning Manager.

Note:

  • Revisits and parallel sessions: Count as active when the above conditions are met.
  • All devices, browsers, languages: Included; offline mobile usage is added after sync.

Benefits of the new calculation

  • Accurate reporting: Eliminates inflated times from unattended players, providing realistic learning durations.
  • Better compliance: Supports accurate tracking for mandatory training (for example, a company’s 5-hour monthly requirement).
  • Improved dashboards: User activity graphs and time-spent reports now reflect actual engagement.
  • Learner insights: Helps administrators identify genuine progress and address disengaged learners.

Reporting and analytics impact

  • Learner Transcripts: “Learning Time Spent” now reflects actual engagement.
  • Admin Dashboard: Metrics that include time (for example, “time spent” tiles, trends) will show lower but more realistic values in scenarios where idle time previously inflated results.
  • Course Enrollment reports: Time‑related fields adopt the new calculation post‑launch.
  • Comparability note: Because historical data is not recalculated, time‑series analyses that span the release date may show a step change. Consider annotation or segmentation by date in analytics tools.

API and connectors

  • No schema changes to existing endpoints/fields that report time spent.
  • Field semantics are updated to reflect active‑time calculation for sessions after the feature launch.
  • Connectors and exports consuming time‑spent fields will automatically receive the updated values going forward.

Backward compatibility and data migration

  • Historical sessions: Not recalculated.
  • New sessions: Use the new active‑time calculation.
  • Mixed periods: For audits or longitudinal reporting, segment by pre‑/post‑launch to avoid misinterpretation.

Known limitations

  • Interactive content (SCORM/Captivate) continues to rely on content‑provided timing; idle detection within the content is not available.
  • Iframe‑based content (HTML/xAPI) limits detection of fine‑grained interactions; tab focus is used instead.

Frequently asked questions

Does this update change historical records?

No. The change applies only to sessions after the feature launch.

How do I verify the changes?

Check Learner Transcripts for recent modules; compare times to expected durations.

Does this affect all accounts?

Yes, it’s a global update for all Adobe Learning Manager accounts.

Do learners need to take action?

No. The change is automatic and transparent to learners.

What if learners leave content open?

Idle time is now excluded, preventing over-reporting.

Are video/audio sessions auto‑paused when the tab is inactive?

No. Playback behavior is unchanged. Time is excluded when paused >10 minutes or when not actively playing.

Will offline mobile activity be reflected?

Yes. Offline usage is included when the device syncs.

What should I do if my dashboards now show lower averages?

This is expected where idle time had previously inflated results. Annotate dashboards and adjust targets as needed.

Are there any prerequisites?

None; the change is automatic.
