JERUSALEM – Renowned Israeli journalist Moshe Nussbaum is making a comeback to broadcasting after ALS robbed him of the ability to speak clearly. Using artificial intelligence (AI) software to recreate his distinctive voice, he will return to Channel 12 news as a commentator.
Nussbaum, diagnosed with ALS two years ago, had initially vowed to continue working as long as physically able. However, the progressive neurological disease made clear communication increasingly difficult, ultimately impacting his ability to report live, especially during the recent Gaza conflict.
He continued to interview injured soldiers for a time, but those appearances became less frequent as the disease worsened.
Now, in a significant development, Channel 12 announced Nussbaum's return in a commentary role. AI technology, trained on thousands of hours of Nussbaum's recordings, will generate his voice, allowing him to present pre-recorded commentary pieces.
"It took me a while to comprehend that it's truly my voice," Nussbaum said. "I'm recognizing the significant impact this technology will have for people with disabilities, myself included." The technology digitally adjusts his lips to match the spoken words.
While the technology gives Nussbaum a way to continue his career, it also raises concerns about potential misuse. For now, the AI cannot be used in live broadcasts, so his contributions will be limited to pre-recorded commentary, and Nussbaum himself acknowledges the risks associated with AI-generated content.
The development draws attention to the evolving role of technology in helping people with disabilities maintain their careers. A preview clip demonstrated the technology's effectiveness, producing a voice strikingly similar to his original.
"Honestly, it feels a bit unusual to be back in the studio after more than a year," the AI-enhanced Nussbaum remarked in the preview. "It's deeply emotional for me."
Though the technology has shown promise in aiding people with speech impairments, misuse, including deepfakes, remains a critical concern. The recent use of similar tools to generate fake audio clips underscores the need for responsible deployment of this powerful technology.