Video closed captions in Dynamic Media

Closed captions can be generated with AI, in more than 60 languages, for video files uploaded to AEM Assets with Dynamic Media.

Transcript
Today I'll be talking about the upcoming video captioning capability for AEM Assets Dynamic Media. Until now, Dynamic Media supported adding multiple audio tracks and captions to videos and delivering them seamlessly at scale. With this capability, users can generate captions for their videos with AI in more than 60 languages. These captions help make videos accessible and ready for global distribution. If you have already enabled DASH delivery with multiple audio tracks and captions in your environments, you get this capability by default with the latest AEM as a Cloud Service release in July. If not, the prerequisite is to raise a support ticket to have it enabled.

Let's look at how it works. Here, I've added a new video and let it process. After the video is done processing, I open the Properties page and go to the Captions and Audio Tracks tab. This is the tab where I would usually add additional captions and audio tracks. With this capability, I can now click Create Caption in the same tab, and it gives me two options: Convert from Audio and Translate from Captions. When I select Convert from Audio, I can select an existing audio file; here, I have selected my original audio. In the Output Languages dropdown, I can select multiple languages and press Done. Here, I've selected three languages: English, Hindi, and Chinese. When I press Save, the caption conversion process starts automatically. I can see that the file is processing. After some time, the files are processed and the AI-based captions are generated. I can easily preview those captions by clicking the video file, navigating to the Viewers section, selecting one of the viewers, and playing the file. Here, I can see that all three captions have been generated for my video.

Take a virtual tour of the 8-by-6-foot supersonic wind tunnel at NASA's Glenn Research Center and discover where NASA researches high-speed regions of flight. As NASA's only transonic propulsion wind tunnel, this facility can test aircraft models with high-fuel-burning engines and models from Mach 0.26 to Mach 2. Providing a high-speed test environment for aircraft and rocket designs for more than 70 years, this wind tunnel has proven its worth by enhancing the nation's aeronautics and space programs, proving NASA is with you when you fly.

So that was the scenario where three captions were generated from the audio file. Let's look at another flow. Let's say that while previewing the VTT you find a mistake and want to correct it easily. That can be done by downloading the AI-generated VTT file and editing it in any text editor, as I'm doing right now. Once you make the change, you can upload the VTT file through the existing flow, replace the existing VTT, and get the new, edited captions. In this scenario, you would now want the translation to happen from this edited caption file. That can be done easily with the Translate from Captions option, where you select the edited VTT file that you want to translate into different languages. Similarly, if you already have pre-generated English VTTs, you can use the translate flow to create captions in different languages from them. These changes can be previewed in the same way, by going to the Viewers page and opening one of the viewers. Thanks for watching this demo.
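The demo mentions downloading an AI-generated caption file, correcting it in a text editor, and re-uploading it. Captions for video are WebVTT files, so the snippet below is a minimal sketch of what such a file looks like; the timestamps are illustrative (not taken from the demo), and the cue text, borrowed from the narration above, is the part you would typically edit before re-uploading.

```
WEBVTT

00:00:00.000 --> 00:00:05.000
Take a virtual tour of the 8-by-6-foot supersonic wind tunnel
at NASA's Glenn Research Center.

00:00:05.000 --> 00:00:09.500
Discover where NASA researches high-speed regions of flight.
```

After editing, the corrected VTT can be uploaded in the same Captions and Audio Tracks tab to replace the AI-generated file, and the Translate from Captions option can then use the corrected file as its source.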