c US Air Force veteran finds hope in suicide prevention therapy focusing on stress responses By www.ksat.com Published On :: Tue, 29 Oct 2024 09:17:35 EDT For many people, it’s a ray of hope rooted in research. Recent research is presenting a new way of thinking about suicide prevention, and both experts and patients believe it’s already saving lives. Full Article
c American College of Surgeons Releases Revised Best Practices Guidelines in Management of Traumatic Brain Injury By www.facs.org Published On :: Wed, 30 Oct 2024 08:20:18 EDT The American College of Surgeons (ACS) has announced the release of its revised Best Practices Guidelines in the Management of Traumatic Brain Injury, offering healthcare providers comprehensive strategies to improve the care and outcomes of patients with traumatic brain injury (TBI). Full Article
c ‘The legacy I want to leave': How a Maine teen is helping with concussion research By www.nbcboston.com Published On :: Wed, 30 Oct 2024 08:25:15 EDT A high school senior from Bangor, Maine, has decided that when the time comes, he will donate his brain to science. Fionn Parker-Cummings is the youngest person to donate his brain to the University of Pittsburgh National Sports Brain Bank. Researchers will check in with him every year for the rest of his life. Parker-Cummings said he hopes the contribution will advance traumatic brain injury research to protect players like NFL quarterbacks Drake Maye and Tua Tagovailoa, who have recently suffered concussions. Full Article
c The VA is undeterred on psychedelics By www.politico.com Published On :: Thu, 31 Oct 2024 09:05:51 EDT The Department of Veterans Affairs remains convinced that psychedelic therapy can help treat veterans with post-traumatic stress disorder — and it’s going to try to prove it. That commitment comes despite the FDA’s August rejection of a drug company’s plan to offer a psychedelic regimen for PTSD. Full Article
c Blood Test Can Benefit NFL Concussion Return-to-Play Decisions By www.newsweek.com Published On :: Thu, 31 Oct 2024 09:09:45 EDT If professional athletes are meant to be real-life superheroes, Miami Dolphins star Tua Tagovailoa looked remarkably mortal on September 12. With his team down 31-10 in the third quarter against the Buffalo Bills, the quarterback spotted a gap at the line of scrimmage and decided to scramble for a first down. When he encountered Bills' defender Damar Hamlin, Tagovailoa lowered his head and ran straight into him. Tagovailoa stayed down injured after the play. He had suffered the third concussion of his NFL career, on top of the one he was diagnosed with during his time in the collegiate ranks. Full Article
c NHLPA hopes committee helps players 'better understand CTE' By www.espn.com Published On :: Mon, 04 Nov 2024 08:53:59 EST The NHLPA will form an advisory committee to help hockey players better understand chronic traumatic encephalopathy and the damage that concussions can do to the brain. NHLPA president Marty Walsh said Friday night that the CTE committee was unanimously approved at a union board meeting this summer. The former Boston mayor made the announcement after receiving an award at the annual gala for the Concussion Legacy Foundation, which supports patients and families struggling with the symptoms of traumatic brain injuries. Full Article
c How Can We Better Support the Mental Well-Being of Veterans Returning from Service? By www.bu.edu Published On :: Wed, 06 Nov 2024 08:12:39 EST Military service is difficult, dangerous, and demanding. But for some veterans, returning to civilian life can also be a challenge. Going from the structure of active duty—with its purpose-driven roles, full-throttle environment, and sense of camaraderie—to the relatively humdrum life of a civilian can be jarring, particularly for veterans who’ve experienced trauma during their service and may suffer from post-traumatic stress disorder (PTSD). Researchers at Boston University, including Dawne Vogt and Casey Taft—both professors of psychiatry in the Chobanian & Avedisian School of Medicine—are utilizing a variety of data-backed strategies to support veterans struggling to adapt to their new normal lives. Full Article
c Recognizing traumatic brain injury as a chronic condition fosters better care over the survivor's lifetime By medicalxpress.com Published On :: Wed, 06 Nov 2024 08:15:36 EST A commentary, published in the Journal of Neurotrauma, calls for traumatic brain injury to be recognized as a chronic condition as are diabetes, asthma, depression and heart failure. To provide comprehensive care for traumatic brain injury throughout individuals' lifespans, the authors propose that coordinated care models they and others have developed, tested and applied to various populations—including older adults, individuals living with depression and post-intensive care unit survivors—be adapted to improve communication and integration between brain injury specialists—including physical medicine and rehabilitation clinicians—and primary care physicians, fostering better long-term patient care for traumatic brain injury survivors and more support for both patients and their families. Full Article
c Make Art or Die? For Some Veterans Creating Art Can Heal from Trauma By www.pbs.org Published On :: Mon, 11 Nov 2024 09:20:23 EST Most people may be familiar with cognitive behavioral therapy (CBT) and other common types of psychotherapy, but art therapy remains somewhat of a mystery to the general public. Not incorporating it into a treatment plan could be a huge missed opportunity for veterans who’ve experienced trauma and other patients in need of mental health support. “Art is a means of therapy and a way to heal,” says Marine Corps veteran Jerry Rael. “It helps me escape some of the things that I went through during my time in service.” Art can also be therapeutic for families who have lost a loved one in battle, as seen in Make Peace or Die: Honor the Fallen. In the film, Marine Anthony Marquez carves battlefield crosses out of wood for each lost service member and then hand-delivers them to Gold Star families, including a grieving mother who had attempted suicide. Full Article
c Hyperbaric Oxygen Therapy Shows Promise in Treating PTSD By www.psychiatrist.com Published On :: Tue, 12 Nov 2024 09:00:20 EST New research data offers much-needed hope for veterans wrestling with combat-associated posttraumatic stress disorder (PTSD). The study explores the use of hyperbaric oxygen therapy (HBOT) to address symptoms that persist despite standard treatments. Full Article
c Chronic Brain Trauma Is Extensive in Navy’s Elite Speedboat Crews By www.nytimes.com Published On :: Tue, 12 Nov 2024 09:14:13 EST The pounding that sailors’ brains take from years of high-speed wave-slamming in the Special Boat Teams can cause symptoms that wreck their careers — and their lives. Full Article
c 13 Awesome Tools & Resources for Designers and Agencies for 2023 By webdesignledger.com Published On :: Thu, 26 Jan 2023 08:00:26 +0000 Web design tools and resources, both free and premium, are designed to help you take full advantage of your creativity and enhance your ability to create stunning websites. There is a huge variety of products and services to choose from, ranging from complete platforms to specialized software applications. Just like design frameworks, wireframing […] Full Article Editors Pick Featured Resources Tools tools for web designers Web Design Resources
c 10+ Best WooCommerce Themes for 2023 By webdesignledger.com Published On :: Tue, 21 Mar 2023 08:00:52 +0000 Are you looking for the best WooCommerce theme to help you build your online store, but are afraid of making a wrong choice? For starters, you want to select a WordPress WooCommerce theme since not all WordPress themes offer complete WooCommerce support. If a theme doesn’t support WooCommerce, it won’t support all the available addons […] Full Article Editors Pick Featured Inspiration Resources Web Design WordPress woocommerce WooCommerce themes
c 10 Excellent WordPress Themes for 2024 By webdesignledger.com Published On :: Thu, 04 Jan 2024 15:00:26 +0000 There is certainly no shortage of excellent WordPress themes you can select from to help you create an engaging and high-performing website. Finding the best one for the job can be a challenge, as there are literally thousands of great WordPress themes available, plus you’ll have to define just what the makeup […] Full Article Inspiration Resources Trends Web Design WordPress wordpress themes
c 10+ Excellent Web Design Tools & Resources to Use in 2024 By webdesignledger.com Published On :: Thu, 15 Feb 2024 13:00:00 +0000 It’s a New Year, but one thing hasn’t changed. The number of web design resources and tools just keeps on increasing. That’s a good thing. But it does make it that much more difficult to find a theme, plugin, or resource you really need if you are to stay abreast of or […] Full Article Editors Pick Featured Resources Tools best web design resources tools for web designers Web Design Tools
c Web Designers’ Favorite Comfort Purchases By webdesignledger.com Published On :: Mon, 08 Jul 2024 20:17:40 +0000 Burt’s Bees Full Article Uncategorized
c How and Why Should Healthcare Providers Ask Patients If They Have PTSD? By www.brainline.org Published On :: Wed, 04 Dec 2013 00:00:00 EST How and why should healthcare providers ask patients if they have PTSD? Click here to see other video Q&As with Lt. Col. Holcombe. Click here to return to our BrainLine Military Ask the Expert feature. Full Article
c The Importance of Getting Your Teenager with TBI to Join a Support Group By www.brainline.org Published On :: Mon, 30 Dec 2013 00:00:00 EST It’s hard to get teenagers to do anything they fear will make them feel “other,” but a TBI support group will help during recovery. Full Article
c Can Ritalin Help Mitigate Brain Injury Symptoms? By www.brainline.org Published On :: Mon, 06 Jan 2014 00:00:00 EST Learn about the risks and benefits of taking Ritalin after a brain injury. Full Article
c Testing Brain Injury-Related Vision Issues with People Who Can’t Communicate By www.brainline.org Published On :: Mon, 27 Jan 2014 00:00:00 EST How do you test the vision of someone with TBI who cannot communicate — talk or respond otherwise to questions? Full Article
c How Accurate is the Movie, "The Vow"? By www.brainline.org Published On :: Mon, 10 Mar 2014 00:00:00 EDT Real life is always more complicated than fiction, especially when it comes to brain injury. Full Article
c A Brain Injury Does Not Mean a Person Loses His Civil Rights By www.brainline.org Published On :: Mon, 31 Mar 2014 00:00:00 EDT The judicial system is very reluctant to deprive any individual of the freedom to make important financial and personal decisions. Full Article
c Can a Hit to the Jaw Cause TBI? By www.brainline.org Published On :: Mon, 21 Apr 2014 00:00:00 EDT The jaw, skull, neck, and brain are all connected. A blow to one can cause damage elsewhere. Full Article
c Why Does Everything Hurt So Much After Brain Injury? By www.brainline.org Published On :: Tue, 27 May 2014 00:00:00 EDT More than 50 percent of people suffer from chronic pain disorders in the years following a brain injury. Getting the right treatment can be complex, but doable. Full Article
c What Is the Ideal Timeline to Do MRIs, fMRIs, DTIs, and Fiber Tracking to Diagnose TBI? By www.brainline.org Published On :: Mon, 16 Jun 2014 00:00:00 EDT More than imaging, diagnosis of TBI is still based on the initial symptoms at the time of the injury. Full Article
c Changes in Taste, Smell, and Hormones After Brain Injury By www.brainline.org Published On :: Mon, 30 Jun 2014 00:00:00 EDT How and why does a TBI change a person’s ability to taste and smell, and cold hands and feet, too? Full Article
c Can Symptoms of Hypopituitarism Present Years After a Brain Injury? By www.brainline.org Published On :: Wed, 03 Sep 2014 00:00:00 EDT Symptoms of hypopituitarism — a condition in which the pituitary gland does not produce normal amounts of some or all of its hormones — would most likely show up soon after a TBI. Full Article
c Can the Drugs Namenda and Aricept Help After Brain Injury? By www.brainline.org Published On :: Mon, 22 Sep 2014 00:00:00 EDT Learn about the risks and benefits of these drugs for symptoms of brain injury. Full Article
c This is why you don’t need more than a 20MP camera By www.diyphotography.net Published On :: Mon, 11 Nov 2024 15:27:59 +0000 How many megapixels is your camera? What about your phone? What about the camera you used ten years ago? If you’re anything like me, you... The post This is why you don’t need more than a 20MP camera appeared first on DIY Photography. Full Article Tutorials Canon Nikon
c 7Artisans 10mm f2.8 II brings low-budget fisheye to your full frame cameras By www.diyphotography.net Published On :: Mon, 11 Nov 2024 17:35:44 +0000 Fisheye lenses were once a very popular type of lens, used for everything from epic ’90s skateboarding videos to photographing sports stadiums. These days, though,... The post 7Artisans 10mm f2.8 II brings low-budget fisheye to your full frame cameras appeared first on DIY Photography. Full Article Gear 7Artisans Gear Announcement lens
c Capturing Baily’s Beads: top tips from an award-winning astrophotographer By www.diyphotography.net Published On :: Tue, 12 Nov 2024 08:49:49 +0000 Solar eclipses are magical. Even more magical are the different moments within them, including Baily’s beads. During a total solar... The post Capturing Baily’s Beads: top tips from an award-winning astrophotographer appeared first on DIY Photography. Full Article Inspiration Astrophotography
c Rode goes tiny with the Rode Wireless Micro for smartphones By www.diyphotography.net Published On :: Tue, 12 Nov 2024 10:53:49 +0000 Rode, the company that kicked off the wave of compact wireless microphones seen over the last few years, has scaled down even further for its... The post Rode goes tiny with the Rode Wireless Micro for smartphones appeared first on DIY Photography. Full Article news Gear Announcement Microphone Rode Microphones
c Susan Meiselas awarded Sony World Photography Awards’ Outstanding Contribution to Photography 2025 By www.diyphotography.net Published On :: Tue, 12 Nov 2024 12:42:50 +0000 In a significant recognition of her pioneering role in documentary photography, Susan Meiselas has been awarded the 2025 Outstanding Contribution to Photography by the Sony... The post Susan Meiselas awarded Sony World Photography Awards’ Outstanding Contribution to Photography 2025 appeared first on DIY Photography. Full Article news Sony World Photography Awards
c Fuji is making its first cinema camera, and it’s got a medium format GFX sensor By www.diyphotography.net Published On :: Tue, 12 Nov 2024 14:45:59 +0000 In a surprise announcement, Fujifilm is entering the cinema camera market. But instead of leveraging their APS-C X-Trans sensor tech to make Super35 cameras, they’re... The post Fuji is making its first cinema camera, and it’s got a medium format GFX sensor appeared first on DIY Photography. Full Article Gear Camera Fujifilm Gear Announcement
c Why use a camera instead of your phone… Or the other way around By www.diyphotography.net Published On :: Tue, 12 Nov 2024 17:15:44 +0000 Why use a camera anymore when you can just use your phone? Or why use your phone when you have that expensive camera? Emily from... The post Why use a camera instead of your phone… Or the other way around appeared first on DIY Photography. Full Article Gear dslr Mirrorless camera phone photography
c Everything you want to know about the FujiFilm GFX ETERNA Cinema Camera By www.diyphotography.net Published On :: Wed, 13 Nov 2024 08:00:45 +0000 A bit of a surprise announcement from Fujifilm! The GFX ETERNA – a cinema camera with a medium format sensor, putting it toe to toe... The post Everything you want to know about the FujiFilm GFX ETERNA Cinema Camera appeared first on DIY Photography. Full Article Gear Fujifilm
c Exclusive DIYP report: phone fakes a Hasselblad lens By www.diyphotography.net Published On :: Wed, 13 Nov 2024 09:01:10 +0000 If you are in the market for a Hasselblad phone, you’ll probably end up with a OnePlus unit, as they have a long collaboration with Hassy. The... The post Exclusive DIYP report: phone fakes a Hasselblad lens appeared first on DIY Photography. Full Article Gear Hasselblad
c Auphonic Joins NVIDIA Inception By auphonic.com Published On :: Wed, 21 Sep 2022 10:07:24 +0000 We are proud to announce that we recently joined the NVIDIA Inception Program, which will help speed up our deep learning development process and let us offer the best possible audio processing tools to our users. What is NVIDIA Inception NVIDIA is a global leader in hardware and software for Artificial Intelligence (AI). Their NVIDIA Inception Program will enable us to leverage NVIDIA's cutting-edge technology by accessing more diverse cloud and GPU (Graphics Processing Unit) product offerings, which are used in most Machine Learning and Deep Learning model training instances worldwide. This will allow us to streamline AI development and deployment and train bigger machine-learning models to test and evaluate algorithms faster. The program will also offer us the opportunity to collaborate with industry-leading experts and other AI-driven organizations, among other things. Our Deep Learning Development Process For our development process, more GPU capacity means a great saving of time and, with it, of costs. As an example, one training cycle of our dynamic denoiser model takes almost a week on GPUs; the same training cycle on CPUs would take several months. To illustrate, a CPU (Central Processing Unit) can be compared to a race car, which is very fast but can only transfer a small number of packages, while a GPU in this comparison is a big truck, which can transfer a huge number of packages more slowly. Deep learning algorithms require very large training datasets consisting of thousands of files, so our 'trucks', the GPUs, which process many computations simultaneously, are the best hardware for the job. The more GPU capacity we can use, the faster we get results for our tested algorithms, and the faster we know which direction to follow to offer our users the best possible audio processing tools. Unfortunately, the world is right in the middle of a Global Chip Shortage, so the latest GPUs are very hard to get and super expensive to purchase – unless you have a partnership with a GPU manufacturer. Conclusion We are happy to join such a renowned program and look forward to the product updates we will be able to implement, and potentially to a greater industry transformation. You can read our full press release here: AuphonicNVIDIAInceptionPressRelease (pdf) Full Article News
c Auphonic Speech Recognition Engine using Whisper by OpenAI (Beta) By auphonic.com Published On :: Tue, 08 Nov 2022 08:55:26 +0000 Today we release our first self-hosted Auphonic Speech Recognition Engine using the open-source Whisper model by OpenAI! With Whisper, you can now integrate automatic speech recognition in 99 languages into your Auphonic audio post-production workflow, without creating an external account and without extra costs! Whisper Speech Recognition in Auphonic So far, Auphonic users had to choose one of our integrated external service providers (Wit.ai, Google Cloud Speech, Amazon Transcribe, Speechmatics) for speech recognition, so audio files were transferred to an external server, using external computing powers, that users had to pay for in their external accounts. The new Auphonic Speech Recognition is using Whisper, which was published by OpenAI as an open-source project. Open-source means, the publicly shared GitHub repository contains a complete Whisper package including source code, examples, and research results. However, automatic speech recognition is a very time and hardware-consuming process, that can be incredibly slow using a standard home computer without special GPUs. So we decided to integrate this service and offer you automatic speech recognition (ASR) by Whisper processed on our own hardware, just like any other Auphonic processing task, giving you quite some benefits: No external account is needed anymore to run ASR in Auphonic. Your data doesn't leave our Auphonic servers for ASR processing. No extra costs for external ASR services. Additional Auphonic pre- and post-processing for more accurate ASR, especially for Multitrack Productions. The quality of Whisper ASR is absolutely comparable to the “best” services in our comparison table. How to use Whisper? To use the Auphonic Whisper integration, you just have to create a production or preset as you are used to and select “Auphonic Whisper ASR” as “Service” in the section Speech Recognition. This option will automatically appear for Beta and paying users. If you are a free user but want to try Whisper: please just ask for access! When your Auphonic speech recognition is done, you can download your transcript in different formats and may edit or share your transcript with the Auphonic Transcript Editor. For more details about all our integrated speech recognition services, please visit our Speech Recognition Help and watch this channel for Whisper updates – soon to come. Why Beta? We decided to launch Whisper for Beta and paying users only, as Whisper was just published end of September and there was not enough time to test every single use case sufficiently. Another issue is the required computing power: for suitable scaling of the GPU infrastructure, we need a beta phase to test the service while we are monitoring the hardware usage, to make sure there are no server overloads. Conclusion Automatic speech recognition services are evolving very quickly, and we've seen major improvements over the past few years. With Whisper, we can now perform speech recognition without extra costs on our own GPU hardware, no external services are required anymore. Auphonic Whisper ASR is available for Beta and paying users now, free users can ask for Beta access. You are very welcome to send us feedback (directly in the production interface or via email), whether you notice something that works particularly well or discover any problems. Your feedback is a great help to improve the system! 
Full Article Development News
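The Whisper model described above is open source, so you can also see what the underlying engine looks like outside of Auphonic. Below is a minimal sketch using the upstream openai-whisper Python package run locally; this is not Auphonic's hosted integration, and the file name and model size are placeholders.

```python
# Minimal local sketch of the open-source Whisper model the Auphonic service builds on.
# Requires: pip install openai-whisper (plus ffmpeg). File name and model size are placeholders.
import whisper

model = whisper.load_model("small")        # larger models are more accurate but slower
result = model.transcribe("episode.wav")   # language is auto-detected by default

print(result["text"])                      # full transcript
for seg in result["segments"]:             # per-segment timestamps in seconds
    print(f"{seg['start']:7.2f}-{seg['end']:7.2f}  {seg['text']}")
```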
c New Auphonic AutoEQ Filtering (Beta) By auphonic.com Published On :: Tue, 24 Jan 2023 09:35:03 +0000 In addition to our Leveler, Denoiser, and Adaptive 'Hi-Pass' Filter, we now release the missing equalization feature with the new Auphonic AutoEQ. The AutoEQ automatically analyzes and optimizes the frequency spectrum of a voice recording, to remove sibilance (De-esser) and to create a clear, warm, and pleasant sound - listen to the audio examples below to get an idea about what it does. Screenshot of manually adjusted example settings for the equalizer plug-in 'Pro-Q3' by fabfilter. What is Equalization and why is it difficult? Equalization (EQ) in audio recording and reproduction is the process of adjusting the volume of different frequency bands within a signal. The following vocal EQ cheat sheet, published by Producer Hive, will give you a small impression, of what can be influenced by equalizing: Vocal EQ Cheat Sheet by Producer Hive. On the other hand, it is very easy to ruin a good voice recording with manual heavy-handed equalization, resulting in voices, that sound very sharp or muddy or even like the speaker had a blocked nose. Besides the skill and experience of an audio engineer, manual adjustments of frequencies also require a very good and linear studio playback device. For example, performing manual equalization with strongly bass-heavy speakers would most likely lead to a very sharp, unpleasant listening experience using treble-heavy headphones. For singletrack productions with more than one speaker, equalizing is also a very complex and time-consuming process, as every voice has its unique frequency spectrum and needs its own equalization. One could separate speakers with cuts or create a track envelope to fade from one speaker to another, however, any solution is a very tedious job, if you do it by hand. That is where the Auphonic AutoEQ comes in! All those steps are now available in just one click! How does the Auphonic AutoEQ work? The Auphonic Web Service analyzes your audio content and classifies the audio file into small and meaningful segments like music, silence, different speakers, etc. to process every single segment with the best matching algorithms. All our features like the Adaptive Leveler, Dynamic Denoising, Adaptive 'Hi-Pass' Filtering, and now the new AutoEQ filter option are built on top of this basic processing. Using Auphonic AutoEQ, spectral EQ profiles are created for each speaker separately and permanently changing over time. The aim of those time-dependent EQ profiles is to create a constant, pleasant sound in the output file even if there are slightly changing voices in the record, for example, due to modified speaker-microphone positions. Audio Examples Here are two short audio examples, which demonstrate some features of our AutoEQ. We recommend listening with headphones so you can hear all the details. Example 1. Female Speaker with Background Music In the following example (BCB: The Voices of Bainbridge Island) of a female narrator speaking while background music is playing, you can easily recognize quite sharp 'sss' sounds in the female voice. This sharpness in the female voice is removed by the so-called De-essing feature of the Auphonic AutoEQ, while the background music is not changed. Original: AutoEQed: Example 2. Dialog of Male and Female Speakers The next example (BCB: The Voices of Bainbridge Island) shows how the AutoEQ optimizes a singletrack record containing two speakers with different voice characteristics. 
Our AutoEQ algorithms analyze each voice separately and calculate the matching frequency adjustments to optimize the voice of every single speaker. Original: AutoEQed: AutoEQ Beta Integration in the Auphonic Web Service To use the Auphonic AutoEQ, you just have to create a production or preset as you are used to, toggle “Advanced Parameters” on the top right in the section “Audio Algorithms” and select “Voice AutoEQ” within “Filtering”: For a first test period, the AutoEQ will only be available for Beta and paying users, to incorporate your feedback and finalize an optimized version. If you are a free user but want to try Auphonic AutoEQ: please just ask for access! Practical Tips For best results using Auphonic AutoEQ, however, it is still necessary, that your audio content is of sufficiently good quality, as no equalizer can make up frequencies that are not there in the first place. Audio files with low bitrates often lack important frequencies, that cannot be recovered by equalizing. AutoEQ is just a feature to boost or cut individual frequency bands, not a bandwidth extension. For more information about required audio quality, see the former blog post: Audio File Formats and Bitrates for Podcasts. Another important topic is the definition of the 'best result'. Equalizing is a very subjective task, that differs a lot depending on every personal opinion. So Auphonic AutoEQ is set up to follow quite conservative rules of equalizing and rather apply subtle tweaks and remove obvious problems, than support personal preferences. This also means, your record will experience no significant changes from Auphonic AutoEQ if it sounds reasonably OK or pretty good already. Conclusion Auphonic audio post production algorithms keep getting better and better in leaps and bounds lately, offering you new Beta Features: Beta Auphonic Denoiser, Beta Auphonic Speech Recognition, and Beta Auphonic AutoEQ. Right now we are fine-tuning all our current Beta Features with high intensity to release a new upgraded version of our Auphonic Web Service as soon as possible. Please watch this channel for further updates – soon to come. If you have any feedback for us or want to become a Beta user, you are very welcome to comment directly in our production interface or to contact us via email! Full Article Audio Development News
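To make the de-essing idea above more concrete, here is a deliberately simple sketch of a single static cut in the sibilance region using scipy. It only illustrates the kind of frequency-band adjustment an equalizer performs; the adaptive, per-speaker, time-varying profiles of the Auphonic AutoEQ work very differently, and the center frequency, Q, and file names below are assumptions chosen for the example.

```python
# Illustration only: one static notch cut around a typical sibilance band with scipy.
# Not the Auphonic AutoEQ -- just a fixed filter to show what an EQ cut is.
import soundfile as sf
from scipy import signal

audio, rate = sf.read("voice.wav")             # mono recording assumed

# Narrow cut around 6 kHz, a common sibilance ("sss") region; values are example choices.
b, a = signal.iirnotch(w0=6000.0, Q=4.0, fs=rate)
deessed = signal.lfilter(b, a, audio)

sf.write("voice_deessed.wav", deessed, rate)
```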
c New Auphonic Website, Free Advanced Algorithms and More Examples By auphonic.com Published On :: Wed, 01 Mar 2023 10:30:19 +0000 To start a new decade of automatic audio post production with Auphonic, we are happy to launch a few updates: New Website Design Opening the new homepage today, you might have noticed that our website looked different from what you had been used to before. Keeping our customers’ feedback from last year in mind, we designed a new vision for Auphonic. Our new website features a refreshed look with an improved, more engaging, and functional user experience. Moreover, a more straightforward, intuitive, and accessible navigation will give you a seamless workflow and a comfortable exploration of Auphonic’s features. We hope it will be easier to explore the diversity of applications that Auphonic has. In the end, however, as before, you will have the same full functionality of Auphonic available to you and some extra features if you are using our paid packages or subscriptions. Take a look yourself: New Auphonic Landing Page Free Access to our Advanced and Beta Algorithms In the past, only paying Auphonic users had access to the advanced algorithm parameters, to multitrack advanced audio algorithms, and to our Dynamic Denoising and AutoEQ beta models. We now enabled all advanced algorithms for free users, and you can use them for 2 hours of audio free each month! Using the Dynamic Denoiser, you can define whether Auphonic should remove only static or also fast-changing noises and if we should keep or eliminate music. For even greater speech intelligibility control, it is possible to manually adjust the amount of denoising to strike the perfect balance between clarity and ambiance. The AutoEQ automatically analyzes and optimizes the frequency spectrum of a voice recording to remove sibilance (De-Esser) and to create a clear, warm, and pleasant sound. The equalization of multi-speaker audio can be complex and time-consuming, as each voice requires its own unique frequency spectrum equalization. Our AutoEQ simplifies this process by creating separate, time-dependent EQ profiles for each speaker, ensuring a consistent and pleasant sound output despite any changes in the voices during the recording. Our advanced algorithm parameters help you to meet all common audio specifications of platforms like Netflix, Audible, podcasts, broadcasters (EBU R128, ATSC A/85, radio and mobile, commercials) in one click. You can define a set of target parameters (integrated loudness, true peak level, dialog normalization, MaxLRA, MaxM, MaxS), like -16 LUFS for podcasts, and we will produce the audio accordingly. In addition, they offer more control for multitrack productions and for the Adaptive Leveler. We would like to give away free hours for new Auphonic users, to try out our free advanced algorithms. Please use this URL to register your new Auphonic account. the code is valid till end of March 2023 and will give you 5 extra production hours for the next month. Happy content creation! More Audio Examples There is no better way to experience Auphonic than hearing the difference our post production tool makes when applied to different types of audio and content. We are happy to share that our new features page now contains some new audio examples you can listen to explore our web tool, and we will add even more examples in the next weeks. Full Article News
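If you want to check how far your own file is from a target such as -16 LUFS before uploading, a rough measurement is easy to script. The sketch below uses the third-party pyloudnorm package as a simple ITU-R BS.1770 meter; it is not the measurement Auphonic itself performs, and the file name and target value are assumptions.

```python
# Rough integrated-loudness check against a podcast-style target of -16 LUFS.
# Uses the third-party pyloudnorm package, not Auphonic's own measurement.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("episode.wav")
meter = pyln.Meter(rate)                      # ITU-R BS.1770 meter
loudness = meter.integrated_loudness(data)    # result in LUFS

print(f"Integrated loudness: {loudness:.1f} LUFS (target: -16 LUFS)")
```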
c Introducing Metadata Variables in Auphonic By auphonic.com Published On :: Tue, 04 Apr 2023 05:56:13 +0000 We've listened to your feedback and are excited to announce the introduction of metadata variables in Auphonic for more advanced use of our Basic and Extended Metadata. This new feature allows you to use metadata fields from your input files to automate workflows. You can easily reference any field by using { curly brackets } and typing the field name, such as {title}, {artist}, {album}, {track}, and more. To get started, take a look at our Formatting Examples and the Table of all Variables to see all the available options. Whether you are using the Auphonic Web Service or our API, metadata variables can be applied whenever metadata values are set. They are particularly helpful when working with Presets, Batch Productions, or Watch Folders! For instance, consider the left column of the following table as input ending up in the metadata as shown in the right column: Field Input Output Album MyPodcast MyPodcast Track 25 25 Title Episode No. {track} of {album}! Episode No. 25 of MyPodcast! Note: Please mind that fields can not refer to each other in cycles (e.g., if {title} refers to {album}, {album} may not refer to {title}). While the input form will not show any errors, the metadata will most likely not be correct and the production will generate a warning. Formatting Examples 1. Generating Output File Basenames With metadata variables you can automatically generate your output file basename based on the input filename and podcast metadata, like album, track, and title. If you have, for example, a podcast preset or input file with the album name "The Placeholder Podcast", you can automatically name your episode title like your input filename. By combining the metadata variables "album" name and your generated episode "title" with any text patterns, like "denoised" in this case, you can create your individual output file basename: Field Input Output Album The Placeholder Podcast The Placeholder Podcast Title {input_filename} interview_jane_doe.wav Output File Basename {album}-{title}-denoised The Placeholder Podcast-interview_jane_doe.wav-denoised The next example shows how you can create truly unique output file basenames with timestamps. Here, a track number is, together with the input file basename, added up as episode title. So, your unique output file basename could be a combination of this generated episode title with the time and date, when your Auphonic production was started: Field Input Output Input File Basename interview_jane_doe interview_jane_doe Track 25 25 Title {track}{input_basename} 25interview_jane_doe Output File Basename {title}_{production_created_at:%H:%M_%m/%d} 25interview_jane_doe_19-05_01-30 See example 3 for more time formatting examples, and this table for the full list of symbols that can be used for formatting dates. 2. Deriving "Title" and "Summary" from Podcast Metadata If the input file metadata contains a track number (alternatively, provided by API or web form), it can be referenced in other fields. Along with the name of the podcast, stored in the "album" field, a value for the title can be created as well as a value for a summary containing all the information: Field Input Output Track 25 25 Album The Placeholder Podcast The Placeholder Podcast Tags Anniversary Anniversary Title {album}, Episode No. {track} The Placeholder Podcast, Episode No. 25 Summary {title} - {tags.0} The Placeholder Podcast, Episode No. 25 - Anniversary 3. 
Adding Time and Date The following time and date example outputs would be possible for a production created at 7:05 pm on Saturday, January 30th in 1999: Field Input Output (any) example-{production_created_at} example-1999-01-30 (any) podcast-{production_created_at:%H:%M-%m/%d/%Y} podcast-19:05-01/30/1999 (any) output-{production_created_at:%I:%M%p-%m%d%y} output-7:05PM-013099 (any) record-{production_created_at:%a-%Y-%b-%d} record-Sat-1999-Jan-30 See this table for the full list of symbols that can be used for formatting dates. 4. Using List Field "Outputfiles" For the list fields "tags", "chapters", "outputfiles", and multitrack "input_filename/basename", you need to reference every value separately by adding .N to your variable – Where N stands for ascending ordinal numbers starting from 0. Combined with the .N you can refer to the format, bitrate, suffix, and ending of every selected output file, for example {outputfiles.0.format} refers to the format of the first output file in your list of outputfiles: Output File 1 – format: WAV 16-bit PCM, bitrate: optimal, suffix: lossless, ending: wav Output File 2 – format: MP3, bitrate: 112 kbps, suffix: lossy, ending: mp3 Field Input Output (any) file1-{outputfiles.0.suffix}-{outputfiles.0.ending} file1-lossless-wav (any) file2-{outputfiles.1.format}-{outputfiles.1.bitrate}kbps file2-lossy-mp3-112kbps (any) file2-bitrate-{outputfiles.1.bitrate:04} file2-bitrate-0112 The bitrate output of the last row is formatted with 4 digits, defined by the suffix :04 attached to the variable. For all available options, please see the Table of List Variables. All Metadata Variables The following variables are available: Variable Referring to Field {input_filename} Full filename of the input file in a singletrack production {input_basename} Basename of the input file in a singletrack production (inputfile.wav becomes inputfile) {title} Title {artist} Artist {album} Album {track} Track {genre} Genre {year} Year {subtitle} Subtitle {publisher} Publisher {url} URL {license} License (Copyright) {license_url} License URL {summary} Summary (Description) {output_basename} Output File Basename {production_created_at} Time and date of production creation {production_modified_at} Time and date of production modification List Variables Referring to List Field {input_filename.N} Full filename of the input file of N-th track in a multitrack production {input_basename.N} Basename of the input file of N-th track in a multitrack production (inputfile.wav becomes inputfile) {tags.N} N-th Element in Tags {chapters.N.start} Start time of N-th Chapter {chapters.N.title} Title of N-th Chapter {chapters.N.url} URL of N-th Chapter {chapters.N.image} Image file name of N-th Chapter {outputfiles.N.format} Format of N-th Output File {outputfiles.N.bitrate} Bitrate of N-th Output File {outputfiles.N.suffix} Suffix of N-th Output File {outputfiles.N.ending} Format ending of N-th Output File For detailed use, please see Formatting Examples. Conclusion Metadata Variables are a powerful tool for organizing your productions whenever metadata values are set. Those field references are very convenient when distinguishing between different files at a glance, particularly when working with Presets, Batch Productions, or Watch Folders. Please do not hesitate to contact us if you have any questions or feedback! Full Article News
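The date and time placeholders in the examples above use standard strftime symbols, so you can preview a pattern with Python before putting it into a metadata field. The snippet below only reproduces the formatting from the documentation examples; the actual substitution happens inside Auphonic.

```python
# Preview of the strftime patterns used by {production_created_at:...} placeholders.
from datetime import datetime

created = datetime(1999, 1, 30, 19, 5)             # the example production time

print(created.strftime("%H:%M-%m/%d/%Y"))          # 19:05-01/30/1999
print(created.strftime("%a-%Y-%b-%d"))             # Sat-1999-Jan-30
print(created.strftime("%H:%M_%m/%d"))             # 19:05_01/30
```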
c Automatic Silence Cutting By auphonic.com Published On :: Tue, 09 May 2023 10:13:21 +0000 Cutting audio can be a rather tedious task. It requires a decent amount of time and is quite repetitive. Often silence segments, like speech breaks or breathing pauses, make cutting audio necessary in the first place. Photo by Daniel Schludi on Unsplash We introduce our new automatic silence cutting feature! It will make your life easier by saving you the time you would normally require to cut silence segments in your recordings. Why do you need to cut silence segments? Silence segments occur in your audio recordings naturally. They can be due to short speech breaks or breathing pauses. It's also possible, that at the beginning of a recording equipment needs to be re-adjusted, which also may result in a few seconds of silence. Usually, listeners do not want to hear silence segments. The reason is easy: silence is redundant filler content. Hence, silence segments need to be cut to achieve a high-quality listening experience. Locating the segments and cutting them manually is tedious though and takes a decent amount of time for longer recordings. This is why we developed and released our own automatic silence cutting feature. How do we cut silence for you? Our automatic silence cutting algorithm detects and cuts silence segments reliably. All you need to do is to enable the algorithm in your production - no further settings are required! This is the easiest possible way for you to cut silence in an audio file. How to enable silence cutting for your production. In our audio processing system, silence cutting is a multi-stage algorithm. We use our sophisticated voice activity detection algorithms to detect what is useful content and what is silence. Then we cut the silence parts and crossfade the remaining audio segments to make sure no audio artifacts are introduced. When cutting we ensure that intended speech breaks, e.g. between two sentences, remain untouched. When we cut any audio, we make sure that chapter marks and speech recognition transcripts are adapted accordingly. Additionally, when exporting to other external services we make sure the cut tracks are exported and correctly labeled. Audio Examples Let's listen to two audio examples. Example 1 (Singletrack production) Here we have a singletrack production with three silence segments. The segments are located at the beginning of recording, between the first and the second "Hi" as well as at the end. Each silence segment is cut down to a length which still sounds natural, but does not annoy the listener. The breathing onsets, which happen straight before voice kicks in again, are not cut. This makes sure that the character of the recording remains natural. For this example, we also activated our new AutoEQ feature to remove pops in the audio. Original: Cut: Here is also a screenshot of how the result looks in our audio player. The gray areas show where our audio processing system cut the silence. The result of example 01, as displayed by our Audio Inspector. Thx to the Feel Free to Deviate podcast for providing this recording. Example 2 (Multitrack production) Let's also look at silence cutting in a multitrack production. This small excerpt of the TVEye podcast contains a music track and three speaker tracks. Before the intro music kicks in, there are a few seconds of silence. Un-cut: Cut: The few seconds of silence at the beginning of the file are cut. 
After that, no further cuts are applied though to make sure the background music remains intact. This is how the result of example 02 looks in our Audio Inspector. Silence Cutting in our Audio Inspector As you may have seen already in the examples, the cut segments are displayed as checked grey areas in the Audio Inspector. This is how they will be displayed by default. Cut segments are displayed as checked gray areas by default. During playback, the Audio Inspector will automatically skip these cut segments on the master track and play the silence segments if you activate the input track. This way, you can check each segment that was cut. It's also possible to hide the cut segments. The following picture shows you how to do that. You can also hide cut segments. First, you must click onto the "?" icon in the bottom right corner to show the Audio Inspector options. Then you must toggle the "Silence Cut Region" switch. After that, the cut regions are not displayed anymore, and also won't be played back on the input track anymore. If you click onto "Show Stats", you can activate the audio processing statistics. These statistics show you how much of your audio was cut, as well as the resulting track length after the cuts were applied. The processing statistics tell you more about how much audio was cut. Conclusion With our automatic silence cutting feature, we went one step further towards the perfect audio assistant. Audio editing has been a tedious-but-necessary task for a long time, but with our automatic silence cutting feature, we just made it easier for you! Feel free to send us your feedback - how do you like our new feature? Also, make sure to follow us, as we will release more automatic cutting algorithms for you in the future! Full Article Audio News
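For readers curious what a basic silence detector looks like in code, here is a much cruder approach than the VAD-based algorithm described above: a fixed level threshold using the pydub package. It only reports silent regions and applies no crossfades; the threshold and minimum length are assumptions, and this is not Auphonic's implementation.

```python
# Crude silence detection with pydub's level-threshold detector (illustration only).
from pydub import AudioSegment
from pydub.silence import detect_silence

audio = AudioSegment.from_file("raw_recording.wav")

# Regions quieter than -45 dBFS for at least 1.5 seconds (values are example choices).
silent_ranges = detect_silence(audio, min_silence_len=1500, silence_thresh=-45)

for start_ms, end_ms in silent_ranges:
    print(f"silence: {start_ms / 1000:.2f}s - {end_ms / 1000:.2f}s")
```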
c Automatically generate Shownotes, Summaries and Chapters from Recordings By auphonic.com Published On :: Mon, 12 Jun 2023 13:03:18 +0000 We're thrilled to introduce our Automatic Shownotes and Chapters feature. This AI-powered tool effortlessly generates concise summaries, intuitive chapter timestamps and relevant keywords for your podcasts, audio and video files. See our Examples and the How To section below for details. Why do I need Shownotes and Chapters? In addition to links and other information, shownotes contain short summaries of the main topics of your episode, and inserted chapter marks allow you to timestamp sections with different topics of a podcast or video. This makes your content more accessible and user-friendly, enabeling listeners to quickly navigate to specific sections of the episode or find a previous episode to brush up on a particular topic. Shownotes are also very likely to boost your show's Search Engine Optimization and eventually its popularity, leading to an increase in listeners. However, especially structuring the content and finding useful positions for chapter marks is a very time-consuming process, that can be fully automated with our new feature. Besides the obvious use of creating shownotes and chapters for podcasts, you can also use our new feature to easily generate an abstract of your lecture recording, take the summary of your show as the starting point for a social media post, or choose your favourite chapter title as the podcast name. What happens behind the Scenes? When the Automatic Shownotes and Chapters feature is selected, the first step is speech transcription by either our internal Auphonic Whisper ASR or any integrated External ASR Service of your choice. Some open source tools and ChatGPT will then summarize the ASR resulting text in different levels of detail, analyze the content to identify sections with the different topics discussed, and finally complete each section with timestamps for easy navigation. Beginning with the generation of a Long Summary, the number of characters is further reduced for a Brief Summary and from the brief summary a Subtitle and some Keywords for the main topics are extracted. Depending on the duration of the input audio or video file, the level of detail of the thematic sections is also slightly adjusted, resulting in a reasonable number of chapters for very short 5-minute audio files as well as for long 180-minute audio files. How to automatically generate Shownotes and Chapters in Auphonic If you are a paying or beta user, you can automatically generate shownotes and chapters by checking the Automatic Shownotes and Chapters Checkbox in the Auphonic singletrack or multitrack Production Form with any of our ASR Services enabled. Once your production is done, the generated data will show up in your transcript result files and in the well-known Auphonic Transcript Editor above the speech recognition transcript section. By clicking on a chapter title in the Chapters section of the transcript editor, you can jump directly to that chapter in your transcript to review and edit that section. Unless you have manually entered content before, the generated data will also be automatically stored in your audio files' metadata as follows: Generated Long Summary stored in metadata field Summary. Generated Subtitle stored in metadata field Subtitle. Generated Keywords stored in metadata field Tags. Generated Timestamps for thematic sections stored as Start Time of Chapters Marks. 
Generated Headlines for thematic sections stored as Chapter Title of Chapters Marks. The metadata is automatically displayed with your audio file wherever you import your audio for further editing. Please note that not all of our supported Output File Formats are designed to use metadata. For details see our previous blog posts: ID3 Tags Metadata (used in MP3 output files), Vorbis Comment Metadata (used in FLAC, Opus and Ogg Vorbis output files) and MPEG-4 iTunes-style Metadata (used in AAC, M4A/M4B/MP4 and ALAC output files). Example As a real-life example, we automatically generated shownotes and chapters for the Lex Fridman Podcast #367: "Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI". Check out our transcript and generated shownotes: LexFridmanPodcast367-transcript.html Conclusion The automatic generation of shownotes and chapters is a huge time-saver for podcasters and video creators, as it speeds up the tedious process of manually structuring and summarizing your content. For now it is available for all paying or beta users. If you would like to become a beta user, or have any questions or feedback, please do not hesitate to contact us! Full Article Development News
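To illustrate the kind of LLM-based summarization step described above, here is a small sketch that sends an ASR transcript to the OpenAI chat API and asks for a summary, keywords, and chapter headlines. It is not Auphonic's actual pipeline or prompts; the model name, prompt wording, and file name are assumptions.

```python
# Sketch of prompt-based transcript summarization with the openai package (>= 1.0).
# Not Auphonic's pipeline -- model, prompt and file name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You summarize podcast transcripts."},
        {"role": "user", "content": "Write a brief summary, five keywords and "
                                    "chapter headlines for this transcript:\n\n" + transcript},
    ],
)
print(response.choices[0].message.content)
```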
c Improve your Audio with our new Automatic Filler Word Cutter By auphonic.com Published On :: Wed, 04 Oct 2023 08:44:33 +0000 We all know the problem: the content is perfectly prepared, and everything is in place, but the moment you hit the record button, your brain freezes, and what pops out of your mouth is a rain of “ums”, “uhs”, and “mhs” that no listener would enjoy. Cleaning up a record like that by manually cutting out every single filler word is a painstaking task. So we heard your requests to automate the filler word removing task, started implementing it, and are now very happy to release our new Automatic Filler Cutter feature. See our Audio Examples and Usage Instructions below. What is removed? While the definition of filler words is not the same, depending on who you ask, some words can be used as filler as well as content. For example, “like”, “well”, “you know”, etc. cannot be removed without the risk of removing also content and destroying sentences, even if those words are used as filler words in some cases. Therefore, we decided to focus on the removal of the obvious fillers, namely any kind of “ums”, “uhs”, “mhs”, German “ähm”, “äh”, “öh”, French “euh”, “euhm” and similar. Audio Examples 1. English Male Speaker The first audio example is an excerpt from the interview “From Racing Failure to Red Bull Champion: The Untold Christian Horner Story”. Our algorithm found and removed a remarkable ten filler words in this 45-second snippet: Screenshot of the Auphonic Audio Inspector: each pale red shaded area corresponds to a cut-out filler word. Original: Cut: 2. Austrian-German Female Speaker The following example is an interview with the Austrian Ex-Foreign Minister, Karin Kneissl, who uses seven filler words within 26 seconds: Original: Cut: Usage Instructions To use the Auphonic Automatic Filler Cutter feature, you just have to create a production or preset as you are used to and select “Cut Fillers” for “Automatic Cutting” in the section “Audio Algorithms”: When your production is done, all cut-out filler words will appear as pale red shaded areas in the Auphonic Audio Inspector on the production status page, as you can see in the upper screenshot of the Audio Inspector. If you want to remove silent segments from your audio as well, please also enable our Automatic Silence Cutting feature. NOTE: Our Automatic Cutting features (for filler and silence) are not available for video files! Behind the Scenes For the training of our Automatic Filler Cutter AI-Algorithm, we created datasets that contain manually labeled audio files, collected from 'real world' audio data. So far, we have labeled, trained, and tested the system with English, German, Spanish, and French data. However, in the Auphonic Web Service, you can activate and test the Automatic Filler Word Cutter for all languages. We would be very happy to hear how the filler removal works out for completely different-sounding languages from, e.g., the Asian, African, or Slavic language families. Please send us feedback on any problems or error patterns you discover! This will help us generate specific data for the training to improve the algorithm and eliminate your problems. Conclusion Automatic filler word cutting is a powerful tool for podcasters looking to enhance the quality of their content. It boosts clarity and professionalism, all while making your editing process more efficient. Some users, however, see a touch of authenticity in filler words within podcasts. 
So, we leave it up to you to enable or disable the Automatic Filler Cutter feature for your next Auphonic production, depending on your desired style. We are currently working on filler word cutting optimizations for more languages, so watch our channels to get all the news on our upgrades! If you have any feedback for us – how the filler cutter is working in your language, what you do or don't like, what you miss, what else you would want to remove from your audio besides silence and filler words, etc. – you are welcome to contact us via email or directly comment on our production interface! Full Article Audio Development News
c Export Cuts for Audio/Video Editors and Filler Cutting Updates By auphonic.com Published On :: Wed, 15 Nov 2023 14:09:43 +0000 For those of you who like to be in control of every applied cut, we are introducing an update for our Automatic Silence and Filler Word Cutting Algorithms today: The export of Cut Lists allows you to import cuts into your favorite audio/video editor to check and apply the cuts to your files manually. Thanks to your great feedback, we were able to update our “Filler Word Cutting” algorithm as well. Cut Lists Export We now provide the export of various formats of “Cut Lists” in the Auphonic Web Service. You can use these formats to modify and apply cuts in your post production audio or video editing program: Screenshot of the Reaper DAW with imported “ReaperRegions.csv”: pale red regions display filler words and pale gray regions mark silence. To generate the cut lists, choose “Cut List (Filler and Silence)” in the “Output Files” section. By selecting the correct “Ending” you can export the cut lists to a format that is suitable for your post production audio or video editor: We support regions and EDL (edit decision list) formats: Region formats (“AudacityRegions.txt”, “AuditionRegions.csv” and “ReaperRegions.csv”) list filler and silence cut regions with their respective start and end timestamps. You can import those cut region lists into your existing Audacity, Audition, or Reaper projects. There you can edit the cut positions to your liking by deleting, moving, or adding individual cut positions. After that, you may apply the cuts manually. EDL (edit decision list) formats, “SamplitudeReaper.edl”, “FinalCutProLegacy.xml”, “FinalCutProX.xml” and “DaVinciResolve.edl” (also known as “cmx3600” format and compatible with various editors), can be imported in your software to automatically apply cuts from the EDL file. Depending on the software you use, you may also edit individual cuts manually before applying them. Be aware though that in some editors it may not be possible to add them to an existing project. If this is the case, simply open a new project with the EDL file instead. Please just let us know if we do not yet provide a cut list format for your favorite audio or video editing program! Export Uncut Audio To only detect fillers/silence but NOT cut them, choose the “Export Uncut Audio” mode for “Automatic Cutting” in “Audio Algorithms”. Then you can query cutting times via our API or select your favorite Cut List format as “Output File” to only export cut times for later editing, but without applying cuts during the Auphonic audio post production. Using the option “Export Uncut Audio”, it is also possible to use our cutting algorithms for video productions. In this case, we will just detect and report cuts, and you can import and apply them in your favorite video editing program afterwards. Filler Word Cutting Updates Thanks to your great feedback, we've been able to further optimize our Automatic Filler Word Cutting algorithm. By now you probably know that our filler word cutter removes all types of “uhm”, “uh”, “mh”, “ähm”, “euh”, “eh” and similar sounds. If you haven't heard about it yet, our release blog post might be a good starting point. Enhanced Fade Curves We noticed that due to reverb or background noise in input files, some filler cuts did not sound as smooth as they should have. To prevent such rough cuts, we trained our filler detection models with more diverse speech data and compared different fade curve approaches. 
Now the filler detection is much more robust against noise. Additionally, we developed an asymmetrical fade solution, which, in our opinion, makes the cuts almost inaudible. We nonetheless recommend, though, that you use filler cutting in combination with our Dynamic Denoising or Speech Isolation algorithms. These algorithms remove heavy reverb or noise and thus prevent noise artifacts during cutting. Handling of Affirmative Sounds Some of you were bothered by the fact that affirmative sounds, like “mhm”, “aha”, and “yeah” were cut along with filler words. Driven by your feedback, we tweaked our filler detection algorithm in a way that will remove considerably fewer of these sounds. Multitrack Filler Silencing We also received requests to optimize filler cutting for crosstalk in Multitrack productions. Let's assume that two speech tracks are simultaneously active and one of the two tracks contains a filler word. In this case, we can not cut the filler word in one track without destroying parts of the speech in the other track. Our original solution was to skip such a filler. Now, however, we replace it with silence instead. This removes the filler words while preserving all the speech content. Feedback If you want to provide us feedback on how you like the updated filler cutting algorithm or about any other features that are missing, you are very welcome to contact us. You can also leave a comment in the feedback section on the status page of your concrete production. Full Article Audio News
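As a small illustration of what a region-list export can look like on the receiving side: Audacity label tracks are plain tab-separated lines of start time, end time, and label, so reading one back into a script is straightforward. Whether Auphonic's "AudacityRegions.txt" uses exactly this column layout is an assumption here; the sketch only demonstrates the general format.

```python
# Read an Audacity-style label track: "start<TAB>end<TAB>label" per line, times in seconds.
def read_audacity_labels(path):
    regions = []
    with open(path) as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) >= 3:
                regions.append((float(parts[0]), float(parts[1]), parts[2]))
    return regions

for start, end, label in read_audacity_labels("AudacityRegions.txt"):
    print(f"{label}: {start:.2f}s - {end:.2f}s")
```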
c New Auphonic Transcript Editor By auphonic.com Published On :: Thu, 21 Mar 2024 12:31:22 +0000 We're excited to roll out an upgraded version of our Transcript Editor, focusing on enhancing your transcription workflow and making it more intuitive, especially for mobile users. This overhaul introduces several key improvements and features designed to streamline the transcription process. Click here for a Live Demo What's new? Line by Line Editing Your transcript is rendered line by line. This allows for precise editing of every single timestamp. Depending on the speech recognition engine, editing can be done at the word or phrase level. For optimal results, we suggest utilizing our Auphonic Whisper ASR engine. A paragraph with 9 lines; every line represents a "subtitle line" (.vtt, .srt). You can split or combine paragraphs and lines using the Enter and Backspace keys. Our new Playback Slider enables seamless scrolling through the text, while we highlight the currently selected word as you go. With the switchable Play on Click function you can start your playback from anywhere in the transcript. Automatic Shownotes and Chapters If you enable Automatic Shownotes and Chapters in the Production form, we include AI-generated shownotes and chapters directly in the Transcript Editor. You can edit Chapter Times and Text directly within the Transcript Editor. Once you click Save (top right), any modifications made within the shownotes and chapters will also be saved back to the production. Screenshot of Automatic Shownotes and Chapters within the Transcript Editor. You are also able to edit chapter times directly within the transcript editor. Please note that this only works within the Transcript section of the editor to ensure precise placement of chapters. Screenshot of Edit Chapter Time. Local History: Undo and Redo Our Local History feature offers convenient undo and redo functionality. This means you can effortlessly revert changes or redo them as needed, providing you with greater control and flexibility during the editing process. Edit Speakers Our revamped Transcript Editor automatically assigns speakers in Multitrack Productions. You can use the Track Identifier in our production form to assign speakers and easily edit, remove, or add new ones within the Transcript Editor. So it's clear who says what at any time! Screenshot of Edit Speaker. Confidence Highlighting Within the transcription, areas of low confidence are highlighted by our Highlight Low Confidence feature. So you can check up on the AI and edit words it is not entirely confident about! It also shows you how confident the AI is about a specific area if you hover over it. Offline Mode This feature enables you to download and share the Transcript Editor for offline editing. This means you can share the *.html file with someone else for editing purposes. Simply use the Download Editor action to obtain an offline version of the Transcript Editor. Export Transcript You can export your transcript in all currently relevant formats (.srt, .vtt, .txt, .html, .pdf) and include the files in your publishing process or video editing software. Screenshot of Export Transcript. Responsive Design We want to make podcasting as easy as possible. The responsive design of our Transcript Editor allows for fast editing on mobile devices - so you can edit what you're saying anywhere, anytime! Screenshot of Transcript Editor on a mobile device. Why use it?
Apple Podcasts paved the way for all podcasts to feature transcripts, so users can read through what you are saying and look for specific parts of your talk they're interested in revisiting. That could be book recommendations, advertisements, or one of the ideas you want to share with the world. We're trying to make the transcription process as automated and easy for you as possible. Especially when using Multitrack Productions, we strive to take all the work off your hands and let the AI do its magic. Additionally, we aim to simplify manual corrections wherever automated processes encounter challenges, ensuring that even these adjustments are as effortless as possible. Try it now on auphonic.com! Feedback If you have feature requests or feedback for our new Transcript Editor, please let us know! You can also leave a comment in the feedback section on the status page of your specific production. We're looking forward to hearing from you! Full Article Development News
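If you post-process the exported transcripts yourself, the .srt format mentioned above is easy to parse with nothing but the Python standard library. The sketch below is an illustration, not part of the Auphonic toolchain; it assumes a standard SubRip file (index line, "HH:MM:SS,mmm --> HH:MM:SS,mmm" time line, one or more text lines, blank-line separators), which is how .srt files are commonly structured.

```python
import re

TIME = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")

def to_seconds(ts):
    """Convert an 'HH:MM:SS,mmm' timestamp to seconds."""
    h, m, s, ms = (int(x) for x in TIME.match(ts).groups())
    return h * 3600 + m * 60 + s + ms / 1000

def parse_srt(path):
    """Return a list of (start_sec, end_sec, text) cues from a SubRip file."""
    with open(path, encoding="utf-8") as f:
        blocks = f.read().strip().split("\n\n")
    cues = []
    for block in blocks:
        lines = block.splitlines()
        if len(lines) < 2 or "-->" not in lines[1]:
            continue  # skip malformed blocks
        start, end = (to_seconds(t.strip()) for t in lines[1].split("-->"))
        cues.append((start, end, " ".join(lines[2:])))
    return cues

# Example: dump the transcript as plain text with timestamps
for start, end, text in parse_srt("episode.srt"):
    print(f"[{start:7.2f} - {end:7.2f}] {text}")
```

The same cue list can then feed a website's transcript page, a search index, or a video editor's subtitle track.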
c Independently control Noise, Reverb and Breath Reduction Amounts By auphonic.com Published On :: Thu, 16 May 2024 13:20:55 +0000 Responding to your feedback, we are now proud to present new separate parameters for noise, reverb, and breath reduction to give you more flexible control over your individual output results. Find all the new parameters below and listen to the Audio Examples to get a closer impression of the upgrade. What's the update about? Before Previously, you could only set the Denoising Method and one reduction amount, which was used for all elements. Depending on the selected method, you were already able to decide whether music, static, or changing noises should be removed, but there was no setting to keep typewriter sound effects while removing the reverb, for example. Now With our latest upgrade, you can now set the reduction amounts separately for noise, reverb, and breathing sounds. For example, you could completely remove the background noise while reducing the reverb just a little to enhance speech intelligibility but keep the atmosphere, as we did in Audio Example 1. Many of you have also asked about the possibility of slightly reducing breath sounds rather than eliminating them completely. In Audio Example 2 we demonstrate how you can prevent your audio from sounding strange and unnatural by reducing breathing sounds instead of eliminating them completely. To all of you who are happy with the results and don't want anything to change, relax: If you don't change the default settings, the noise reduction algorithms work exactly the same as before. Note: As the 'Static Denoiser' removes only stationary noise, there are no 'Remove Reverb' and 'Remove Breathings' parameters available for this denoising method. New Parameters Screenshot of the new Noise Reduction Parameters in the production form. In order to use the new noise reduction features, you may separately set the following parameters: Denoising Method: (unchanged) Select what kind of noise you want to remove. [Dynamic Denoiser (default), Speech Isolation, Static Denoiser] Note that the parameters 'Remove Reverb' and 'Remove Breathings' are NOT available for Static Denoiser! Remove Noise: Select the amount of noise you want to remove. [100 dB (default), Disable Denoise, 3 dB, 6 dB, ..., 100 dB (full)] Remove Reverb: Select the amount of reverb you want to remove. [100 dB (default), Disable Deverb, 3 dB, 6 dB, ..., 100 dB (full)] Remove Breathings: Select the amount of breathing sounds you want to remove. [Off (default), 3 dB, 6 dB, ..., 100 dB (full)] Feel free to experiment with all the options to find your preferred parameter settings! Editing and reprocessing existing productions does not cost any additional credits as long as you don't change the input file. Listen to the results: 1. Reverb reduction with full noise elimination For the first audio example by conduitministries.com we set the 'Remove Noise' amount to 100 dB (full) and varied the 'Remove Reverb' amount from 0 dB (Off) to 12 dB (medium) and to 100 dB (full). Listen to how the noise disappears first, and then the reverb is reduced step by step: Original -100dB Denoise -0dB Deverb -100dB Denoise -12dB Deverb -100dB Denoise -100dB Deverb 2. Breathing sound reduction In the breathing reduction audio example by LibriVox.org, we increased the 'Remove Breathings' amount from 0 dB (Off) in the original audio to 12 dB (medium) and 100 dB (full) reduction.
In the result files, you can hear that the 100 dB (full) elimination leads to weird, unnatural-sounding pauses, which can be prevented by merely reducing the breathing sounds: Original -12dB Debreath -100dB Debreath Try it now on auphonic.com! Feedback We hope you like our upgraded version of the Noise Reduction Algorithms with new parameters for more control. If you have more feature requests or feedback for us, please let us know! You can also leave a comment in the feedback section on the status page of your specific production. We're looking forward to hearing from you! Full Article Audio Development News
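If you create productions through the Auphonic API rather than the web form, the separate amounts described above are set in the "algorithms" section of a production. The sketch below uses the real production endpoint and basic authentication, but the field names for the reverb and breath amounts are assumptions made for illustration ("denoise" and "denoiseamount" have existed in the API for a while; the method, reverb, and breath keys may be named differently), so please check the current API documentation before relying on it.

```python
import requests  # third-party; pip install requests

API = "https://auphonic.com/api/productions.json"

payload = {
    "metadata": {"title": "Denoise example"},
    "algorithms": {
        "denoise": True,
        "denoisemethod": "dynamic",   # assumed key/value for the Dynamic Denoiser
        "denoiseamount": 100,         # remove noise fully (dB)
        "dereverbamount": 12,         # assumed key: reduce reverb moderately (dB)
        "debreathamount": 12,         # assumed key: reduce breathing sounds (dB)
    },
}

# Basic auth with your Auphonic username/password; an OAuth bearer token also works.
resp = requests.post(API, json=payload, auth=("username", "password"))
resp.raise_for_status()
print(resp.json()["data"]["uuid"])  # UUID of the newly created production
```

After creating the production you would still upload an input file and start processing as usual; reprocessing with different amounts costs no extra credits as long as the input file stays the same.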
c Speaker Identification in Single Track Productions By auphonic.com Published On :: Mon, 05 Aug 2024 09:34:04 +0000 We're excited to announce the release of our new speaker diarization feature! With our latest update, we can now detect individual speakers within a single audio recording, allowing users to see precisely who said what and when. This information is then reflected in the transcript. Previously, this feature was only available in Multitrack Productions. What is Speaker Diarization? Speaker diarization is the process of identifying different speakers and segmenting an audio file accordingly, making it easier to accurately determine who said what. This is particularly useful in scenarios like meeting recordings, interviews, and podcasts involving multiple speakers. What's New Until now, Auphonic users had to use our Multitrack Production Workflow to gain access to speaker identification. With our recent update, this feature is now also accessible for single track productions, thereby further simplifying our workflow. How It Works When you select Auphonic Whisper ASR as your speech recognition engine on our production page, the option to use “Speaker Detection” appears. You can select the number of speakers in the recording yourself, or have the AI identify and label them automatically. Select "speaker detection" in the speech recognition menu The identified speakers will then be shown next to what they said in the captions of our audio player and within the generated transcript. This way, you know exactly who is saying what at any given time. If you need to change speakers' names, you can edit them in our Transcript Editor. When you change a speaker's name, the change is applied automatically throughout the whole transcript. Click here for an example transcript to play around with! The transcript editor also lets you edit the generated transcript's content and correct any mistakes. This gives our users the final say in quality control. For more information on how to use our transcript editor, click here! Conclusion With our newly released speaker detection feature for single track productions, we simplified the automatic transcription process even further. You can drop a single audio file into our algorithms, and the AI automatically detects and transcribes its contents and attributes speakers correctly. And if the model should unexpectedly be wrong at any time, you can correct it in our Transcript Editor. Feedback We're looking forward to having you utilize our new speaker diarization feature! Please feel free to send us feedback through our Contact Form or submit it directly on the status page of your production. Happy detecting! Full Article Audio News
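As a small illustration of what a diarized transcript enables downstream, the sketch below tallies speaking time per speaker. It assumes each caption text starts with a "Speaker 1:"-style label; the actual label format in your exported transcript may differ, so adapt the splitting accordingly. The (start, end, text) cues are the kind produced by an .srt/.vtt parser like the one shown in the Transcript Editor entry above.

```python
from collections import defaultdict

def speaking_time(cues):
    """Sum caption durations per speaker.
    `cues` is an iterable of (start_sec, end_sec, text); each text is
    assumed to begin with a label such as 'Speaker 1: ...'."""
    totals = defaultdict(float)
    for start, end, text in cues:
        speaker, _, _ = text.partition(":")
        totals[speaker.strip() or "Unknown"] += end - start
    return dict(totals)

# Hypothetical cues for demonstration only
cues = [
    (0.0, 4.0, "Speaker 1: Welcome to the show."),
    (4.0, 9.5, "Speaker 2: Thanks for having me."),
    (9.5, 12.0, "Speaker 1: Let's get started."),
]
print(speaking_time(cues))  # {'Speaker 1': 6.5, 'Speaker 2': 5.5}
```

The same per-speaker grouping can also drive shownotes ("guest segments start at ..."), audiograms, or simple talk-time statistics for an episode.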
c Introducing Auphonic Team Accounts By auphonic.com Published On :: Wed, 14 Aug 2024 09:11:04 +0000 The new Auphonic team accounts streamline billing for companies, teams, or groups and are available to business users. A team account's owner can invite team members and admins, all of whom share the team account's credits for their productions. The team account feature only impacts billing. Team members cannot access each other’s productions, presets, etc. If you're interested in switching to a team account, please contact us! What are Auphonic Teams? A team consists of an Owner with Members and optional Admins. The Owner and Admins can add and remove team members as well as change their roles in the team. Every added account can use Auphonic as usual, but only the account Owner's credits are used for billing productions. This simplifies the invoicing process for teams using multiple Auphonic accounts. One Owner provides the credits, and all of the team can use them. Overview of the team accounts section in the account settings. Inviting Users After gaining access to the team account feature, you can invite new team members via the team account section on the account settings page. Invited team members can be existing Auphonic users or new users who do not yet have an account. When inviting users, you can assign them a role: Member or Admin. Sent invitations can be cancelled at any time. Managing Team Account Members The team account Owner and Admins can remove members and change roles within the team. Editing roles Roles and Permissions Within the team, there are three roles: Owner, Admin and Member. Each role has specific permissions: All roles and permissions. Note that the Owner role (held by the account creator) cannot be changed manually. Please contact us if you want to change the owner of a team account! Billing The Owner of the team account is responsible for purchasing credits. Productions made by team members will use the team account's credits. As a team account member, you will see the team account owner's credit balance instead of your own. If you wish to access your personal credits, you can leave the team at any time. Conclusion With the introduction of team accounts, we provide a solution to simplify the production process for teams. One account is responsible for providing the credits, and all team members can use them collectively! Contact us if you are a business user interested in using our new feature. Feedback We are always happy to receive feedback on our features! If you have thoughts, ideas or recommendations, drop us a line using our contact form or email us directly. We're looking forward to hearing from you! Full Article News