
Female athletes may be more susceptible to concussion and have prolonged symptoms after concussion

Female athletes are more susceptible to sport-related concussions (SRCs) and experience worse outcomes compared with male athletes. Although numerous studies on SRC have compared the outcomes of concussions in male and female athletes after injury, research pertaining to why female athletes have worse outcomes is limited.





Historic Review of Repetitive Head Impacts and Chronic Traumatic Encephalopathy

Chronic traumatic encephalopathy (CTE) is a neurodegenerative disease associated with a history of repetitive head impacts (RHI). To better understand the strength of evidence underlying the possible causal relationship between RHI and CTE, we examined the medical literature through the Bradford Hill criteria for causation.





Serious Brain Trauma, Chronic Traumatic Encephalopathy (CTE), Found in Young Athletes

In this case series of 152 contact sport athletes younger than 30 years at the time of death, chronic traumatic encephalopathy (CTE) was found in 63 (41.4%), with nearly all having mild CTE (stages I and II). 





WPCandy Roundtable #3: WordPress Meetup Discussion with 4 co-organizers

Let’s talk about WordPress meetups! On this episode of The WPCandy Roundtable Podcast, I asked four awesome WordPress meetup co-organizers to chat about what makes a quality meetup and how they do what they do. We also went over what they would recommend to new WordPress meetup organizers. Joining me on the podcast are Angie […]

The post WPCandy Roundtable #3: WordPress Meetup Discussion with 4 co-organizers appeared first on WPCandy.





The Story of WonderThemes: Why starting a WordPress theme marketplace isn’t easy

Michael Kimb Jones — or Kimb, as he prefers people call him — sat waiting for his time to stand up and speak. There he sat, on the ground floor of Surgeons Hall at the Royal College of Surgeons, running over the things he was about to talk about. His presentation was for WordCamp Edinburgh in […]






Review: Professional WordPress Design and Development, 2nd Edition

Let’s start this review with an honest admission. While in the 90s I collected (and read, mostly) tech books like they were going out of style, these days I barely get my hands on a book, let alone consume it. Frankly, I don’t have the time. I have three kids (two are twins) and a […]






An update on The WPCandy Quarterly #2!

I’m happy to announce that we’re finalizing the second issue of The WPCandy Quarterly, in the last stage before sending it off to the printers. It’s an exciting — but busy! — time, and we’re most psyched to be so close to getting another issue into everyone’s hands. We have an amazing roundup of contributors […]






Dolphins QB Tua Tagovailoa still faces concussion concerns

IT WAS GETTING late the night of Sept. 12, and Dr. Julian Bailes had settled in to watch a little football. The renowned neurosurgeon and concussions expert tuned in just as Miami Dolphins quarterback Tua Tagovailoa sustained yet another head injury. There was Tagovailoa, lying on the ground with his arms outstretched and his fingers curled into the fencing response, an involuntary position that occurs after a brain injury. To Bailes, it looked almost like a replay from a previous Thursday night game, Sept. 29, 2022, with Tagovailoa on his back, his fingers splayed in front of his face in another automatic response to a concussion.





New Study Reveals Hidden COVID Proteins in Blood of Long Haulers

A study conducted by Harvard-affiliated Brigham and Women’s Hospital revealed that individuals experiencing a broad array of long COVID symptoms are twice as likely to have traces of SARS-CoV-2 proteins in their blood compared to those without symptoms of long COVID. The symptoms frequently associated with long COVID include fatigue, brain fog, muscle and joint pain, back pain, headaches, sleep issues, loss of smell or taste, and gastrointestinal problems. These findings were published in the journal Clinical Microbiology and Infection.





This common gesture could be a sign of a concussion, researchers say

A quick shake of the head after a hard hit could signal that a person has a concussion, a new study suggests, based on the experiences of young athletes. It’s an easily recognizable movement that could help significantly reduce the number of concussions that go undiagnosed if added to official evaluation guidelines, according to researchers from Mass General Brigham and the Concussion Legacy Foundation.





Tagovailoa rejects calls to retire after 3rd official concussion of his NFL career

Miami Dolphins quarterback Tua Tagovailoa is rejecting calls to retire after the third official concussion of his NFL career. 





N.C. State quarterback Grayson McCall retires from football at 23 after concussions

North Carolina State quarterback Grayson McCall is retiring from football, he announced Wednesday after sustaining a head injury during a game earlier this month. "Brain specialists, my family, and I have come to the conclusion that it is in my best interest to hang the cleats up," he continued. "I have done everything I can to continue, but this is where the good Lord has called me to serve in a different space."





A year after Maine mass shooting, gunman's family wants action on brain injury research in military

The family of the Army reservist who committed the October 2023 massacre in Lewiston wants to bring awareness to traumatic brain injuries among military members.





Western researchers closing in on treatment for long COVID 

Western University researchers are taking a unique global approach to finding an effective treatment for people living with long COVID. Led by Schulich School of Medicine & Dentistry professor Dr. Douglas Fraser, the projects represent the first multi-continental research conducted on long COVID, with study sites in Africa, North America, and South America. The long-term effects of long COVID, including symptoms such as brain fog, fatigue and difficulty breathing, can be debilitating and greatly impact quality of life. At least 65 million people around the world are affected by the condition.





US Air Force veteran finds hope in suicide prevention therapy focusing on stress responses

For many people, it’s a ray of hope rooted in research. Recent studies present a new way of thinking about suicide prevention, and both experts and patients believe it’s already saving lives.





American College of Surgeons Releases Revised Best Practices Guidelines in Management of Traumatic Brain Injury

The American College of Surgeons (ACS) has announced the release of its revised Best Practices Guidelines in the Management of Traumatic Brain Injury, offering healthcare providers comprehensive strategies to improve the care and outcomes of patients with traumatic brain injury (TBI).





‘The legacy I want to leave': How a Maine teen is helping with concussion research

A high school senior from Bangor, Maine, has decided that when the time comes, he will donate his brain to science. Fionn Parker-Cummings is the youngest person to donate his brain to the University of Pittsburgh National Sports Brain Bank. Researchers will check in with him every year for the rest of his life. Parker-Cummings said he hopes the contribution will advance traumatic brain injury research to protect players like NFL quarterbacks Drake Maye and Tua Tagovailoa, who have recently suffered concussions.





The VA is undeterred on psychedelics

The Department of Veterans Affairs remains convinced that psychedelic therapy can help treat veterans with post-traumatic stress disorder — and it’s going to try to prove it. That commitment comes despite the FDA’s August rejection of a drug company’s plan to offer a psychedelic regimen for PTSD.





Blood Test Can Benefit NFL Concussion Return-to-Play Decisions

If professional athletes are meant to be real-life superheroes, Miami Dolphins star Tua Tagovailoa looked remarkably mortal on September 12. With his team down 31-10 in the third quarter against the Buffalo Bills, the quarterback spotted a gap at the line of scrimmage and decided to scramble for a first down. When he encountered Bills' defender Damar Hamlin, Tagovailoa lowered his head and ran straight into him. Tagovailoa stayed down injured after the play. He had suffered the third concussion of his NFL career, on top of the one he was diagnosed with during his time in the collegiate ranks.





Recognizing traumatic brain injury as a chronic condition fosters better care over the survivor's lifetime

A commentary published in the Journal of Neurotrauma calls for traumatic brain injury to be recognized as a chronic condition, as diabetes, asthma, depression and heart failure already are. To provide comprehensive care for traumatic brain injury throughout individuals' lifespans, the authors propose adapting coordinated care models that they and others have developed, tested, and applied to various populations, including older adults, individuals living with depression, and post-intensive care unit survivors. The adapted models would improve communication and integration between brain injury specialists (including physical medicine and rehabilitation clinicians) and primary care physicians, fostering better long-term care for traumatic brain injury survivors and more support for both patients and their families.





Chronic Brain Trauma Is Extensive in Navy’s Elite Speedboat Crews

The pounding that sailors’ brains take from years of high-speed wave-slamming in the Special Boat Teams can cause symptoms that wreck their careers — and their lives.





5+ Best WordPress Plugins to Try Out [2024 edition]

The WordPress directory hosts 50k+ free plugins, developed so you can extend a WordPress website with functions and features far beyond your imagination. Thousands more WordPress plugins are available from various marketplaces. No matter how much you might need a specialized plugin or two, would you really want to […]





Testing Brain Injury-Related Vision Issues with People Who Can’t Communicate

How do you test the vision of someone with TBI who cannot communicate — talk or respond otherwise to questions?





A Brain Injury Does Not Mean a Person Loses His Civil Rights

The judicial system is very reluctant to deprive any individual of the freedom to make important financial and personal decisions.





Changes in Taste, Smell, and Hormones After Brain Injury

How and why does a TBI change a person’s ability to taste and smell, and cause cold hands and feet, too?





It is Never Too Late to Get Rehabilitation for Brain Injury

A child with a brain injury may develop more injury-related issues as an adult.







Susan Meiselas awarded Sony World Photography Awards’ Outstanding Contribution to Photography 2025

In a significant recognition of her pioneering role in documentary photography, Susan Meiselas has been awarded the 2025 Outstanding Contribution to Photography by the Sony...

The post Susan Meiselas awarded Sony World Photography Awards’ Outstanding Contribution to Photography 2025 appeared first on DIY Photography.







Auphonic Joins NVIDIA Inception

We are proud to announce that we recently joined the NVIDIA Inception Program, which will help to speed up our deep learning development process and therefore offer the best possible audio processing tools to our users.

What is NVIDIA Inception

NVIDIA is a global leader in hardware and software for Artificial Intelligence (AI).
Their NVIDIA Inception Program will enable us to leverage NVIDIA's cutting-edge technology by accessing more diverse cloud and GPU (Graphics Processing Unit) product offerings, which are used in most Machine Learning and Deep Learning model training instances worldwide. This will allow us to streamline AI development and deployment and train bigger machine-learning models to test and evaluate algorithms faster. The program will also offer us the opportunity to collaborate with industry-leading experts and other AI-driven organizations, among other things.

Our Deep Learning Development Process

For our development process, more GPU capacity means significant savings in time and, with it, costs. As an example, one training cycle of our dynamic denoiser model takes almost a week on GPUs; the same training cycle on CPUs would take several months.

To illustrate, a CPU (Central Processing Unit) can be compared to a race car: very fast, but able to carry only a few packages at a time. A GPU, in this comparison, is a big truck that moves a huge number of packages more slowly. Training deep learning algorithms requires very large datasets consisting of thousands of files, so our 'trucks', the GPUs, are the best hardware choice, as they process many computations simultaneously.

The more GPU capacity we can use, the faster we get results for the algorithms we test, and the faster we know which direction to pursue to offer our users the best possible audio processing tools.
Unfortunately, the world is right in the middle of a Global Chip Shortage, so the latest GPUs are very hard to get and super expensive to purchase – unless you have a partnership with a GPU manufacturer.

Conclusion

We are happy to join such a renowned program and look forward to the product updates it will allow us to implement, and potentially to a greater industry transformation.

You can read our full press release here: AuphonicNVIDIAInceptionPressRelease (pdf)








Auphonic Speech Recognition Engine using Whisper by OpenAI (Beta)

Today we release our first self-hosted Auphonic Speech Recognition Engine using the open-source Whisper model by OpenAI!
With Whisper, you can now integrate automatic speech recognition in 99 languages into your Auphonic audio post-production workflow, without creating an external account and without extra costs!

Whisper Speech Recognition in Auphonic

So far, Auphonic users had to choose one of our integrated external service providers (Wit.ai, Google Cloud Speech, Amazon Transcribe, Speechmatics) for speech recognition: audio files were transferred to an external server and processed with external computing power that users had to pay for through their external accounts.

The new Auphonic Speech Recognition uses Whisper, which OpenAI published as an open-source project: the publicly shared GitHub repository contains the complete Whisper package, including source code, examples, and research results.
However, automatic speech recognition is a very time- and hardware-intensive process that can be incredibly slow on a standard home computer without dedicated GPUs. So we decided to integrate this service and offer automatic speech recognition (ASR) with Whisper processed on our own hardware, just like any other Auphonic processing task, which gives you several benefits:

  • No external account is needed anymore to run ASR in Auphonic.
  • Your data doesn't leave our Auphonic servers for ASR processing.
  • No extra costs for external ASR services.
  • Additional Auphonic pre- and post-processing for more accurate ASR, especially for Multitrack Productions.
  • The quality of Whisper ASR is absolutely comparable to the “best” services in our comparison table.

How to use Whisper?

To use the Auphonic Whisper integration, you just have to create a production or preset as you are used to and select “Auphonic Whisper ASR” as “Service” in the section Speech Recognition.
This option will automatically appear for Beta and paying users. If you are a free user but want to try Whisper: please just ask for access!

When your Auphonic speech recognition is done, you can download your transcript in different formats and may edit or share your transcript with the Auphonic Transcript Editor.
For more details about all our integrated speech recognition services, please visit our Speech Recognition Help and watch this channel for Whisper updates – soon to come.

Why Beta?

We decided to launch Whisper for Beta and paying users only, as Whisper was published just at the end of September and there was not enough time to test every single use case sufficiently.
Another issue is the required computing power: for suitable scaling of the GPU infrastructure, we need a beta phase to test the service while we are monitoring the hardware usage, to make sure there are no server overloads.

Conclusion

Automatic speech recognition services are evolving very quickly, and we've seen major improvements over the past few years.
With Whisper, we can now perform speech recognition on our own GPU hardware without extra costs; no external services are required anymore.

Auphonic Whisper ASR is available for Beta and paying users now, free users can ask for Beta access.
You are very welcome to send us feedback (directly in the production interface or via email), whether you notice something that works particularly well or discover any problems.
Your feedback is a great help to improve the system!








New Auphonic AutoEQ Filtering (Beta)

In addition to our Leveler, Denoiser, and Adaptive 'Hi-Pass' Filter, we now release the missing equalization feature with the new Auphonic AutoEQ.
The AutoEQ automatically analyzes and optimizes the frequency spectrum of a voice recording to remove sibilance (De-esser) and to create a clear, warm, and pleasant sound - listen to the audio examples below to get an idea of what it does.

Screenshot of manually adjusted example settings for the equalizer plug-in 'Pro-Q3' by fabfilter.

What is Equalization and why is it difficult?

Equalization (EQ) in audio recording and reproduction is the process of adjusting the volume of different frequency bands within a signal.
The following vocal EQ cheat sheet, published by Producer Hive, will give you an impression of what equalizing can influence:

Vocal EQ Cheat Sheet by Producer Hive.
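A single parametric EQ band like the ones shown in such plug-ins is typically implemented as a biquad filter. As a toy illustration of cutting one frequency band (this is not Auphonic's implementation), here is a standard "audio EQ cookbook" peaking filter in pure Python that attenuates a sibilance-range tone by 6 dB while leaving a low voice frequency untouched:

```python
import math

def peaking_biquad(fs, f0, gain_db, q):
    """Peaking-EQ biquad coefficients (RBJ 'audio EQ cookbook'), normalized so a0 = 1."""
    a = 10 ** (gain_db / 40)              # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0, b1, b2 = 1 + alpha * a, -2 * cosw, 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * cosw, 1 - alpha / a
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def biquad_filter(b, a, samples):
    """Direct-form-I filtering of a list of samples."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Cut 6 dB around 6 kHz (a typical sibilance region) at a 48 kHz sample rate:
fs = 48000
b, a = peaking_biquad(fs, f0=6000, gain_db=-6.0, q=1.0)
sibilant = [math.sin(2 * math.pi * 6000 * n / fs) for n in range(9600)]
low_tone = [math.sin(2 * math.pi * 200 * n / fs) for n in range(9600)]
rms = lambda v: (sum(x * x for x in v) / len(v)) ** 0.5
print(rms(biquad_filter(b, a, sibilant)[2400:]) / rms(sibilant[2400:]))  # ~0.5, i.e. -6 dB
print(rms(biquad_filter(b, a, low_tone)[2400:]) / rms(low_tone[2400:]))  # ~1.0, untouched
```

A de-esser works on the same principle, but detects sibilant segments first and applies the cut only there.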

On the other hand, it is very easy to ruin a good voice recording with heavy-handed manual equalization, resulting in voices that sound very sharp or muddy, or even as if the speaker had a blocked nose.
Besides the skill and experience of an audio engineer, manual frequency adjustments also require a very good, linear studio playback device. For example, performing manual equalization on strongly bass-heavy speakers would most likely lead to a very sharp, unpleasant listening experience on treble-heavy headphones.

For singletrack productions with more than one speaker, equalizing is also a complex and time-consuming process, as every voice has its own frequency spectrum and needs its own equalization. One could separate speakers with cuts or create a track envelope to fade from one speaker to another; however, any such solution is very tedious if done by hand.

That is where the Auphonic AutoEQ comes in! All those steps are now available in just one click!

How does the Auphonic AutoEQ work?

The Auphonic Web Service analyzes your audio content and classifies the audio file into small and meaningful segments like music, silence, different speakers, etc. to process every single segment with the best matching algorithms.
All our features like the Adaptive Leveler, Dynamic Denoising, Adaptive 'Hi-Pass' Filtering, and now the new AutoEQ filter option are built on top of this basic processing.

Using Auphonic AutoEQ, spectral EQ profiles are created for each speaker separately and continuously adapted over time. The aim of these time-dependent EQ profiles is to create a constant, pleasant sound in the output file even if voices change slightly during the recording, for example due to modified speaker-microphone positions.
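Segment-wise, per-speaker processing can be sketched in a few lines. The snippet below is a deliberately oversimplified, hypothetical stand-in (a single scalar gain per speaker instead of a full spectral EQ profile) just to illustrate applying a separate profile to each classified segment:

```python
def process_by_speaker(samples, segments, profiles):
    """Apply a per-speaker 'profile' (here just a scalar gain) segment by segment.

    samples:  list of audio samples
    segments: list of (start, end, speaker) index ranges from the classification step
    profiles: dict mapping speaker label to a gain factor
    """
    out = list(samples)
    for start, end, speaker in segments:
        gain = profiles[speaker]
        for i in range(start, end):
            out[i] *= gain
    return out

# Two speakers detected in one singletrack file:
audio = [1.0] * 6
segments = [(0, 3, "speaker_a"), (3, 6, "speaker_b")]
print(process_by_speaker(audio, segments, {"speaker_a": 0.5, "speaker_b": 2.0}))
# [0.5, 0.5, 0.5, 2.0, 2.0, 2.0]
```

In the real system each profile would be a frequency-dependent curve that also drifts over time, but the segment-by-segment structure is the same.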

Audio Examples

Here are two short audio examples, which demonstrate some features of our AutoEQ.
We recommend listening with headphones so you can hear all the details.

Example 1. Female Speaker with Background Music

In the following example (BCB: The Voices of Bainbridge Island) of a female narrator speaking while background music is playing, you can easily recognize quite sharp 'sss' sounds in the female voice. This sharpness in the female voice is removed by the so-called De-essing feature of the Auphonic AutoEQ, while the background music is not changed.

Original:
AutoEQed:

Example 2. Dialog of Male and Female Speakers

The next example (BCB: The Voices of Bainbridge Island) shows how the AutoEQ optimizes a singletrack record containing two speakers with different voice characteristics. Our AutoEQ algorithms analyze each voice separately and calculate the matching frequency adjustments to optimize the voice of every single speaker.

Original:
AutoEQed:

AutoEQ Beta Integration in the Auphonic Web Service

To use the Auphonic AutoEQ, you just have to create a production or preset as you are used to, toggle “Advanced Parameters” on the top right in the section “Audio Algorithms” and select “Voice AutoEQ” within “Filtering”:

For a first test period, the AutoEQ will only be available for Beta and paying users, to incorporate your feedback and finalize an optimized version.
If you are a free user but want to try Auphonic AutoEQ: please just ask for access!

Practical Tips

For best results with Auphonic AutoEQ, your audio content still needs to be of sufficiently good quality, as no equalizer can make up frequencies that are not there in the first place. Audio files with low bitrates often lack important frequencies that cannot be recovered by equalizing: AutoEQ boosts or cuts individual frequency bands, it does not perform bandwidth extension. For more information about required audio quality, see our earlier blog post: Audio File Formats and Bitrates for Podcasts.

Another important topic is the definition of the 'best result'. Equalizing is a very subjective task that varies a lot with personal taste. Auphonic AutoEQ is therefore set up to follow quite conservative equalization rules, applying subtle tweaks and removing obvious problems rather than catering to personal preferences. This also means your recording will see no significant changes from Auphonic AutoEQ if it already sounds reasonably good.

Conclusion

Auphonic's audio post production algorithms have been improving in leaps and bounds lately, offering you new Beta features: the Beta Auphonic Denoiser, Beta Auphonic Speech Recognition, and Beta Auphonic AutoEQ.
Right now we are intensively fine-tuning all current Beta features to release an upgraded version of our Auphonic Web Service as soon as possible.
Please watch this channel for further updates – soon to come.

If you have any feedback for us or want to become a Beta user, you are very welcome to comment directly in our production interface or to contact us via email!








New Auphonic Website, Free Advanced Algorithms and More Examples

To start a new decade of automatic audio post production with Auphonic, we are happy to launch a few updates:

New Website Design

Opening the new homepage today, you might have noticed that our website looks different from what you were used to. Keeping our customers’ feedback from last year in mind, we designed a new vision for Auphonic.

Our new website features a refreshed look with an improved, more engaging, and functional user experience. Moreover, a more straightforward, intuitive, and accessible navigation will give you a seamless workflow and a comfortable exploration of Auphonic’s features.
We hope it will be easier to explore the diversity of applications that Auphonic offers. In the end, though, you will have the same full Auphonic functionality as before, plus some extra features if you are using our paid packages or subscriptions.

Take a look yourself: New Auphonic Landing Page

Free Access to our Advanced and Beta Algorithms

In the past, only paying Auphonic users had access to the advanced algorithm parameters, to multitrack advanced audio algorithms, and to our Dynamic Denoising and AutoEQ beta models.

We now enabled all advanced algorithms for free users, and you can use them for 2 hours of audio free each month!

Using the Dynamic Denoiser, you can define whether Auphonic should remove only static or also fast-changing noises and if we should keep or eliminate music. For even greater speech intelligibility control, it is possible to manually adjust the amount of denoising to strike the perfect balance between clarity and ambiance.

The AutoEQ automatically analyzes and optimizes the frequency spectrum of a voice recording to remove sibilance (De-Esser) and to create a clear, warm, and pleasant sound.
The equalization of multi-speaker audio can be complex and time-consuming, as each voice requires its own unique frequency spectrum equalization. Our AutoEQ simplifies this process by creating separate, time-dependent EQ profiles for each speaker, ensuring a consistent and pleasant sound output despite any changes in the voices during the recording.

Our advanced algorithm parameters help you to meet all common audio specifications of platforms like Netflix, Audible, podcasts, broadcasters (EBU R128, ATSC A/85, radio and mobile, commercials) in one click. You can define a set of target parameters (integrated loudness, true peak level, dialog normalization, MaxLRA, MaxM, MaxS), like -16 LUFS for podcasts, and we will produce the audio accordingly.
In addition, they offer more control for multitrack productions and for the Adaptive Leveler.

We would like to give away free hours for new Auphonic users to try out our free advanced algorithms. Please use this URL to register your new Auphonic account. The code is valid until the end of March 2023 and will give you 5 extra production hours for the following month. Happy content creation!

More Audio Examples

There is no better way to experience Auphonic than hearing the difference our post production tool makes when applied to different types of audio and content.
We are happy to share that our new features page now contains new audio examples you can listen to while exploring our web tool, and we will add even more examples in the coming weeks.








Introducing Metadata Variables in Auphonic

We've listened to your feedback and are excited to announce the introduction of metadata variables in Auphonic for more advanced use of our Basic and Extended Metadata.
This new feature allows you to use metadata fields from your input files to automate workflows. You can easily reference any field by using { curly brackets } and typing the field name, such as {title}, {artist}, {album}, {track}, and more.
To get started, take a look at our Formatting Examples and the Table of all Variables to see all the available options.

Whether you are using the Auphonic Web Service or our API, metadata variables can be applied whenever metadata values are set. They are particularly helpful when working with Presets, Batch Productions, or Watch Folders!

For instance, consider the "Input" column of the following table as what you enter, ending up in the metadata as shown in the "Output" column:

Field | Input | Output
Album | MyPodcast | MyPodcast
Track | 25 | 25
Title | Episode No. {track} of {album}! | Episode No. 25 of MyPodcast!

Note:
Please mind that fields cannot refer to each other in cycles (e.g., if {title} refers to {album}, {album} may not refer to {title}). While the input form will not show any errors, the metadata will most likely be incorrect and the production will generate a warning.
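The substitution described above can be sketched with a small resolver. This is a hypothetical illustration, not Auphonic's actual implementation; it handles only simple field names plus an optional strftime or format-spec suffix:

```python
import re
from datetime import datetime

def resolve(template, fields, created_at):
    """Replace {field} and {field:format} references in a template string."""
    def substitute(match):
        name, spec = match.group(1), match.group(2)
        if name == "production_created_at":
            return created_at.strftime(spec or "%Y-%m-%d")
        value = fields[name]
        return format(value, spec) if spec else str(value)
    return re.sub(r"\{(\w+)(?::([^}]+))?\}", substitute, template)

fields = {"album": "MyPodcast", "track": 25}
print(resolve("Episode No. {track} of {album}!", fields, datetime(1999, 1, 30, 19, 5)))
# Episode No. 25 of MyPodcast!
```

The same mechanism covers the date formatting (`{production_created_at:%H:%M-%m/%d/%Y}`) and zero-padding (`{bitrate:04}`) shown in the examples that follow.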

Formatting Examples

1. Generating Output File Basenames

With metadata variables you can automatically generate your output file basename based on the input filename and podcast metadata, like album, track, and title.
If you have, for example, a podcast preset or input file with the album name "The Placeholder Podcast", you can automatically use your input filename as the episode title. By combining the "album" metadata variable and your generated episode "title" with any text pattern, like "denoised" in this case, you can create an individual output file basename:

Field | Input | Output
Album | The Placeholder Podcast | The Placeholder Podcast
Title | {input_filename} | interview_jane_doe.wav
Output File Basename | {album}-{title}-denoised | The Placeholder Podcast-interview_jane_doe.wav-denoised

The next example shows how you can create truly unique output file basenames with timestamps. Here, the track number and the input file basename are combined into the episode title. Your unique output file basename can then combine this generated title with the time and date at which your Auphonic production was started:

Field | Input | Output
Input File Basename | interview_jane_doe | interview_jane_doe
Track | 25 | 25
Title | {track}{input_basename} | 25interview_jane_doe
Output File Basename | {title}_{production_created_at:%H:%M_%m/%d} | 25interview_jane_doe_19-05_01-30

See example 3 for more time formatting examples, and this table for the full list of symbols that can be used for formatting dates.

2. Deriving "Title" and "Summary" from Podcast Metadata

If the input file metadata contains a track number (alternatively, provided by API or web form), it can be referenced in other fields. Along with the name of the podcast, stored in the "album" field, a value for the title can be created as well as a value for a summary containing all the information:

Field | Input | Output
Track | 25 | 25
Album | The Placeholder Podcast | The Placeholder Podcast
Tags | Anniversary | Anniversary
Title | {album}, Episode No. {track} | The Placeholder Podcast, Episode No. 25
Summary | {title} - {tags.0} | The Placeholder Podcast, Episode No. 25 - Anniversary

3. Adding Time and Date

The following time and date example outputs would be possible for a production created at 7:05 pm on Saturday, January 30th in 1999:

Field | Input | Output
(any) | example-{production_created_at} | example-1999-01-30
(any) | podcast-{production_created_at:%H:%M-%m/%d/%Y} | podcast-19:05-01/30/1999
(any) | output-{production_created_at:%I:%M%p-%m%d%y} | output-7:05PM-013099
(any) | record-{production_created_at:%a-%Y-%b-%d} | record-Sat-1999-Jan-30

See this table for the full list of symbols that can be used for formatting dates.

4. Using List Field "Outputfiles"

For the list fields "tags", "chapters", "outputfiles", and multitrack "input_filename/basename", you need to reference every value separately by adding .N to your variable, where N stands for ascending ordinal numbers starting from 0.
Combined with the .N you can refer to the format, bitrate, suffix, and ending of every selected output file, for example {outputfiles.0.format} refers to the format of the first output file in your list of outputfiles:

  • Output File 1 – format: WAV 16-bit PCM, bitrate: optimal, suffix: lossless, ending: wav
  • Output File 2 – format: MP3, bitrate: 112 kbps, suffix: lossy, ending: mp3

Field | Input | Output
(any) | file1-{outputfiles.0.suffix}-{outputfiles.0.ending} | file1-lossless-wav
(any) | file2-{outputfiles.1.format}-{outputfiles.1.bitrate}kbps | file2-lossy-mp3-112kbps
(any) | file2-bitrate-{outputfiles.1.bitrate:04} | file2-bitrate-0112

The bitrate output of the last row is formatted with 4 digits, defined by the suffix :04 attached to the variable.

For all available options, please see the Table of List Variables.
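References with .N can be thought of as dotted-path lookups into nested production data. A hypothetical sketch (again, not the real implementation) with the two output files from above:

```python
def lookup(path, data):
    """Resolve a dotted path like 'outputfiles.1.bitrate' against nested dicts/lists."""
    current = data
    for part in path.split("."):
        current = current[int(part)] if part.isdigit() else current[part]
    return current

production = {
    "tags": ["Anniversary"],
    "outputfiles": [
        {"format": "WAV 16-bit PCM", "suffix": "lossless", "ending": "wav"},
        {"format": "MP3", "bitrate": 112, "suffix": "lossy", "ending": "mp3"},
    ],
}
print(lookup("outputfiles.0.suffix", production))  # lossless
print(lookup("tags.0", production))                # Anniversary
```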

All Metadata Variables

The following variables are available:

Variable | Referring to Field
{input_filename} | Full filename of the input file in a singletrack production
{input_basename} | Basename of the input file in a singletrack production (inputfile.wav becomes inputfile)
{title} | Title
{artist} | Artist
{album} | Album
{track} | Track
{genre} | Genre
{year} | Year
{subtitle} | Subtitle
{publisher} | Publisher
{url} | URL
{license} | License (Copyright)
{license_url} | License URL
{summary} | Summary (Description)
{output_basename} | Output File Basename
{production_created_at} | Time and date of production creation
{production_modified_at} | Time and date of production modification

List Variables Referring to List Field
{input_filename.N} Full filename of the input file of N-th track in a multitrack production
{input_basename.N} Basename of the input file of N-th track in a multitrack production
(inputfile.wav becomes inputfile)
{tags.N} N-th Element in Tags
{chapters.N.start} Start time of N-th Chapter
{chapters.N.title} Title of N-th Chapter
{chapters.N.url} URL of N-th Chapter
{chapters.N.image} Image file name of N-th Chapter
{outputfiles.N.format} Format of N-th Output File
{outputfiles.N.bitrate} Bitrate of N-th Output File
{outputfiles.N.suffix} Suffix of N-th Output File
{outputfiles.N.ending} Format ending of N-th Output File

For detailed use, please see Formatting Examples.

Conclusion

Metadata Variables are a powerful tool for organizing your productions whenever metadata values are set. Those field references are very convenient when distinguishing between different files at a glance, particularly when working with Presets, Batch Productions, or Watch Folders.

Please do not hesitate to contact us if you have any questions or feedback!








New Auphonic Transcript Editor

We're excited to roll out an upgraded version of our Transcript Editor, focusing on enhancing your transcription workflow and making it more intuitive, especially for mobile users. This overhaul introduces several key improvements and features designed to streamline the transcription process.

Click here for a Live Demo

What's new?

Line by Line Editing

Your transcript is rendered line by line, which allows precise editing of every single timestamp. Depending on the speech recognition engine, editing can be done at the word or phrase level.
For optimal results, we suggest using our Auphonic Whisper ASR engine.

A paragraph with 9 lines; each line represents a "subtitle line" (.vtt, .srt).

You can split or combine paragraphs and lines using the Enter and Backspace keys. Our new Playback Slider enables seamless scrolling through the text, while we highlight the currently selected word as you go. With the switchable Play on Click function you can start your playback from anywhere in the transcript.

Automatic Shownotes and Chapters

If you enable Automatic Shownotes and Chapters in the Production form, we include AI generated shownotes and chapters directly into the Transcript Editor. You can edit Chapter Times and Text directly within the Transcript Editor. Once you click Save (top right), any modifications made within the shownotes and chapters will also be saved back to the production.

Screenshot of Automatic Shownotes and Chapters within the Transcript Editor.

Please note that chapter times can only be edited within the Transcript section of the editor, to ensure precise placement of chapters.

Screenshot of Edit Chapter Time.

Local History: Undo and Redo

Our Local History feature offers convenient undo and redo functionality. This means you can effortlessly revert changes or redo them as needed, providing you with greater control and flexibility during the editing process.

Edit Speakers

Our revamped Transcript Editor automatically assigns speakers in Multitrack Productions. You can use the Track Identifier in our production form to assign speakers and easily edit, remove, or add new ones within the Transcript Editor. So it's clear who says what at any time!

Screenshot of Edit Speaker.

Confidence Highlighting

Within the transcription, areas of low confidence are highlighted by our Highlight Low Confidence feature, so you can check up on the AI and edit words it is not entirely confident about. Hovering over a highlighted area also shows you how confident the AI is about that specific section.

Offline Mode

This feature enables you to download and share the Transcript Editor for offline editing. This means you can share the *.html file with someone else for editing purposes. Simply use the Download Editor action to obtain an offline version of the Transcript Editor.

Export Transcript

You can export your transcript in all currently relevant formats (.srt, .vtt, .txt, .html, .pdf) and include them in your publishing process or video editing software.

Screenshot of Export Transcript.
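As a point of reference for downstream tooling, SubRip (.srt) is the simplest of these formats: each cue is a sequence number, a time range, and the text. A minimal, illustrative writer (not Auphonic's exporter) looks like this:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 83.5 -> '00:01:23,500'."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_cue(index: int, start: float, end: float, text: str) -> str:
    """One subtitle cue in SubRip (.srt) format."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

print(srt_cue(1, 0.0, 2.5, "Welcome to the show!"))
```

WebVTT (.vtt) cues look almost identical, but use a dot instead of a comma before the milliseconds and start with a `WEBVTT` header.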

Responsive Design

We want to make podcasting as easy as possible. The responsive design of our Transcript Editor allows for fast editing on mobile devices - so you can edit what you're saying anywhere, anytime!

Screenshot of Transcript Editor on a mobile device.

Why use it?

Apple Podcasts paved the way for all podcasts to feature transcripts, so users can read through what you are saying and look for specific parts of your talk they're interested in revisiting. That could be book recommendations, advertisements or one of your ideas you want to share with the world.

We're trying to make the transcription process as automated and easy for you as possible. Especially when using Multitrack Productions, we strive to take all the work off your hands and let the AI do its magic.

Additionally, we aim to simplify manual corrections wherever automated processes encounter challenges, ensuring that even these adjustments are as effortless as possible.

Try it now on auphonic.com!

Feedback

If you have feature requests or feedback for our new Transcript Editor, please let us know! You can also leave a comment in the feedback section on the status page of your specific production.
We're looking forward to hearing from you!








Independently control Noise, Reverb and Breath Reduction Amounts

Responding to your feedback, we are proud to present new separate parameters for noise, reverb, and breath reduction, giving you more flexible control to achieve your best output results.
Find all the new parameters below and listen to the Audio Examples to get a closer impression of the upgrade.


What's the update about?

Before

Previously, you could only choose the Denoising Method and a single reduction amount, which was applied to all elements.
Depending on the selected method, you could already decide whether music, static, or changing noises should be removed, but there was no setting to, for example, keep the typewriter sound effects while removing the reverb.

Now

With our latest upgrade, you can now set the reduction amounts separately for noise, reverb, and breathing sounds.
For example, you could completely remove the background noise while reducing the reverb just a little to enhance speech intelligibility but keep the atmosphere. Like we did in Audio Example 1.
Many of you have also asked about the possibility of slightly reducing breath sounds rather than eliminating them completely. In Audio Example 2 we demonstrate how you can prevent your audio from sounding strange and unnatural by reducing instead of eliminating all breathing sounds.

To all of you who are happy with the results and don't want anything to change, relax:
If you don't change the default settings, the noise reduction algorithms work exactly the same as before.

Note: As the 'Static Denoiser' removes only stationary noise, there are no 'Remove Reverb' and 'Remove Breathings' parameters available for this denoising method.

New Parameters

Screenshot of the new Noise Reduction Parameters in the production form.

In order to use the new noise reduction features, you may separately set the following parameters:

  • Denoising Method: (unchanged) Select what kind of noise you want to remove.
    [Dynamic Denoiser (default), Speech Isolation, Static Denoiser]
    Note that the parameters 'Remove Reverb' and 'Remove Breathings' are NOT available for Static Denoiser!
  • Remove Noise: Select the amount of noise you want to remove.
    [100 dB (default), Disable Denoise, 3 dB, 6 dB, ..., 100 dB (full)]
  • Remove Reverb: Select the amount of reverb you want to remove.
    [100 dB (default), Disable Deverb, 3 dB, 6 dB, ..., 100 dB (full)]
  • Remove Breathings: Select the amount of breathings you want to remove.
    [Off (default), 3 dB, 6 dB, ..., 100 dB (full)]

Feel free to experiment with all the options to find your preferred parameter settings! Editing and reprocessing existing productions does not cost any additional credits as long as you don't change the input file.
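If you drive Auphonic through its API, the separate amounts are set alongside the existing algorithm options in the production payload. A hedged sketch follows: the `algorithms`, `denoise`, and `denoiseamount` keys are documented API fields, but the reverb and breath key names below are hypothetical placeholders, so please check the current API reference for the real names:

```python
import json

# Sketch of an API payload for a production with separate reduction amounts.
# NOTE: "deverbamount" and "debreathamount" are hypothetical names used for
# illustration only; consult the Auphonic API reference for the actual keys.
payload = {
    "title": "My Podcast Episode",
    "algorithms": {
        "denoise": True,
        "denoiseamount": 100,   # remove noise fully (100 dB)
        "deverbamount": 12,     # hypothetical: reduce reverb by 12 dB
        "debreathamount": 12,   # hypothetical: reduce breaths by 12 dB
    },
}
print(json.dumps(payload, indent=2))
```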

Listen to the results:

1. Reverb reduction with full noise elimination

For the first audio example by conduitministries.com, we set the 'Remove Noise' amount to 100 dB (full) and varied the 'Remove Reverb' amount from 0 dB (Off) to 12 dB (medium) and then to 100 dB (full). Listen to how the noise disappears first, and then the reverb is reduced step by step:

Original
-100 dB Denoise / -0 dB Deverb
-100 dB Denoise / -12 dB Deverb
-100 dB Denoise / -100 dB Deverb

2. Breathing sound reduction

In the breathing reduction audio example by LibriVox.org, we increased the 'Remove Breathings' amount from the original audio at 0 dB (Off) to 12 dB (medium) and then to 100 dB (full) reduction.
In the result files you can hear that the 100 dB (full) elimination leads to weird, unnatural-sounding pauses, which can be prevented by merely reducing the breathing sounds instead:

Original
-12dB Debreath
-100dB Debreath

Try it now on auphonic.com!

Feedback

We hope you like our upgraded version of the Noise Reduction Algorithms with new parameters for more control.
If you have more feature requests or feedback for us, please let us know! You can also leave a comment in the feedback section on the status page of your specific production. We're looking forward to hearing from you!








Speaker Identification in Single Track Productions

We're excited to announce the release of our new speaker diarization feature! With our latest update, we can now detect individual speakers within a single audio recording, allowing users to see precisely who said what and when. This information is then reflected in the transcript.
Previously, this feature was only available in Multitrack Productions.

What is Speaker Diarization?

Speaker diarization is the process of identifying different speakers and segmenting an audio file accordingly, making it easier to accurately determine who said what. This is particularly useful in scenarios like meeting recordings, interviews, and podcasts involving multiple speakers.
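As a toy illustration of the idea (not Auphonic's model), diarization output is essentially a list of time segments labeled by speaker; consecutive segments from the same speaker can then be merged into the "who said what, when" turns you see in the transcript. All names and timings below are made up:

```python
# Toy diarization output: segments with hypothetical speakers and timings.
segments = [
    {"speaker": "Speaker 1", "start": 0.0, "end": 2.1, "text": "Hi, welcome back."},
    {"speaker": "Speaker 1", "start": 2.1, "end": 4.0, "text": "Today we have a guest."},
    {"speaker": "Speaker 2", "start": 4.2, "end": 6.0, "text": "Thanks for having me!"},
]

def merge_turns(segments):
    """Collapse consecutive same-speaker segments into speaker turns."""
    turns = []
    for seg in segments:
        if turns and turns[-1]["speaker"] == seg["speaker"]:
            turns[-1]["end"] = seg["end"]          # extend the current turn
            turns[-1]["text"] += " " + seg["text"]
        else:
            turns.append(dict(seg))                # start a new turn
    return turns

for turn in merge_turns(segments):
    print(f'{turn["speaker"]} [{turn["start"]:.1f}-{turn["end"]:.1f}]: {turn["text"]}')
```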

What's New

Until now, Auphonic users had to use our Multitrack Production Workflow to gain access to speaker identification. With our recent update, this feature is now also accessible for single track productions, thereby further simplifying our workflow.

How It Works

When selecting Auphonic Whisper ASR as your speech recognition engine on our production page, the option to use “Speaker Detection” appears. You can select the number of speakers in the recording yourself, or have the AI identify and label them automatically.

Select "speaker detection" in the speech recognition menu

The identified speakers are then shown next to what they said in the captions of our audio player and within the generated transcript, so you know exactly who is saying what at any given time.

If you need to change speakers' names, you can edit them in our Transcript Editor. Changing a speaker's name automatically applies the change throughout the whole transcript.

Click here for an example transcript to play around with!

The transcript editor also lets you edit the generated transcript's content and correct any mistakes. This gives our users the final say in quality control.

For more information on how to use our transcript editor, click here!

Conclusion

With our newly released speaker detection feature for single track productions, we have simplified the automatic transcription process even further. You can drop a single audio file into our algorithms, and the AI automatically detects and transcribes its contents and attributes speakers correctly. And if the model is unexpectedly wrong at any point, you can correct it in our Transcript Editor.

Feedback

We're looking forward to having you utilize our new speaker diarization feature!
Please, feel free to send us feedback through our Contact Form or submit it directly in the status page of your production.
Happy detecting!








Introducing Auphonic Team Accounts

The new Auphonic team accounts streamline billing for companies, teams, or groups and are available to business users. A team account's owner can invite team members and admins, all of whom share the team account's credits for their productions.

The team account feature only impacts billing. Team members cannot access each other’s productions, presets, etc.
If you're interested in switching to a team account, please contact us!

What are Auphonic Teams?

A team consists of an Owner with Members and optional Admins. The Owner and Admins can add and remove team members as well as change their roles in the team. Every added account can use Auphonic as usual, but only the account Owner's credits are used for billing productions.

This simplifies the invoicing process for teams using multiple Auphonic accounts.
One Owner provides the credits, and all of the team can use them.

Overview of the team accounts section in the account settings.

Inviting Users

After gaining access to the team account feature, you can invite new team members via the team account section on the account settings page. Invited team members can be:

  • Existing Auphonic users
  • New users who do not yet have an account

When inviting users, you can assign them a role: Member or Admin. Sent invitations can be cancelled at any time.

Managing Team Account Members

The team account Owner and Admins can remove members and change roles within the team.

Editing roles

Roles and Permissions

Within the team, there are three roles: Owner, Admin and Member. Each role has specific permissions:

All roles and permissions.

Note that the Owner role - held by the account creator - cannot be changed manually.
Please contact us if you want to change the owner of a team account!

Billing

The Owner of the team account is responsible for purchasing credits. Productions made by team members will use the team account's credits. As a team account member, you will see the team account owner's credit balance instead of your own.

If you wish to access your personal credits, you can leave the team at any time.

Conclusion

With the introduction of team accounts, we provide a solution to simplify the production process for teams. One account is responsible for providing the credits and all team members can use them collectively!

Contact us if you are a business user and interested in using our new feature.

Feedback

We are always happy to receive feedback on our features!
If you have thoughts, ideas or recommendations, drop us a line using our contact form or email us directly.

We're looking forward to hearing from you!








How to Make Money with AI Photography

Hey there, fellow shutterbugs! It’s your favorite AI-obsessed photographer here, ready to spill the beans on how to use Artificial Intelligence to make money in photography. I know, I know – I used to be the loudest anti-AI voice in the room. But then I realized: if you can’t beat ’em, join ’em (and make some cash while you’re at it)!

So, I dove headfirst into the world of AI-generated/modified images, and let me tell you, it’s been a game-changer. No more hours spent editing and retouching – now, I can create stunning, sellable images with just a few clicks. And the best part? The money’s rolling in like never before.

Ready to join the AI revolution and start raking in the dough? Let’s get started!

How to Make Money with AI-Generated Photography

AI art is taking the world by storm. And if you’ve got a knack for creating stunning visuals with AI tools, you might be wondering how to turn your passion into profit. Well, I’ve got good news for you. There are plenty of ways to monetize your AI-generated photography skills.





AI Is About to Change Education Forever (Again)

Change is a scary word. So when we hear it non-stop in conversations about AI, it’s no surprise that many people feel a sense of unease. But what if AI could actually be used for good? Imagine a world where technology is not just about profits and efficiency, but also about improving education and learning opportunities for all. Sal Khan believes that future is possible. If his name sounds familiar, that’s probably because you’ve picked up a skill or two through his widely-used learning platform, Khan Academy. And if that’s the case, you wouldn’t be alone. Some 170 million people around the world currently use its educational resources on subjects ranging from Algebra 1 to Chemistry. A lot of this help is video-based, but they’ve begun introducing adaptive software with interactive exercises more recently as well. Looking at the big picture, Sal hopes to build Khan Academy into a fully-fledged digital tutor capable of both teaching lessons and providing personalized learning experiences to users. Does AI Enable Cheating In Schools? The widespread popularization of AI has garnered some serious questions about cheating in schools. Teachers and parents are voicing concern that kids will use the technology to cut corners, or...

The post AI Is About to Change Education Forever (Again) first appeared on Chase Jarvis.






Playing It Safe? How to Step Outside Your Comfort Zone

It’s incredibly easy to settle into the comfort of the familiar—the same routines, the usual habits, and the paths well-trodden. But somewhere in the back of your mind, isn’t there a small voice urging you to shake things up, to take a risk, to venture into uncharted territory?  Maybe you’ve experienced this feeling while watching someone fully immerse themselves in their craft on stage—whether through music, dance, or a bold stand-up routine. Isn’t it absolutely thrilling? I’ve always been captivated by those moments. There’s something magnetic about witnessing someone throw caution to the wind and dive headfirst into their passion. It’s like watching a rockstar being born right in front of your eyes. This bravery, this unfiltered expression of self, isn’t just reserved for the few. It’s something we all possess. But it demands that we stop playing it so damn safe. Step Outside Your Comfort Zone The truth is, comfort zones are often padded with fear—fear of failure, fear of judgment, fear of the unknown. Yet, every major breakthrough in creativity and personal growth sprouts from stepping beyond these boundaries. Jon Batiste, a maestro of emotion and authenticity, once expressed the dual reality of embracing potential failure with unwavering...

The post Playing It Safe? How to Step Outside Your Comfort Zone first appeared on Chase Jarvis.






Stephanie Harrison: How Hacking Happiness (Actually) Works

Stephanie Harrison’s story starts with a mindset similar to many in their early 20s. Gainfully employed, ambitious, and with all of New York City at her feet, she was exactly where she wanted to be. On paper. Something felt missing, even though her longtime dream of living independently in the Big Apple had been achieved. She recalls being miserable without a clear idea of why. Eventually, ignoring the problem took its toll. Her body began to rebel with physical symptoms – rashes, sores, and panic attacks that left her feeling lost and desperate for answers. So she found them by reading up on the science of happiness. Years of research and reflection later, Stephanie has found the hack. Thousands around the world now seek to learn it through her philosophy-led company, The New Happy, which has been recognized by the likes of CNBC, Fast Company, Forbes, and Harvard Business Review. In this episode of the podcast, we have a candid conversation about what true fulfillment looks like and how to achieve it in a world that prioritizes productivity and external markers of success over inner peace. Question What People Say About Happiness The first, most direct way anyone can fix...

The post Stephanie Harrison: How Hacking Happiness (Actually) Works first appeared on Chase Jarvis.






5 Proven Ways to Get Things Done (No Motivation Required)

Do you ever feel like you’re stuck in a loop, endlessly waiting for that burst of motivation to strike? You’re not alone. The fear of wasted potential and the desire for meaningful progress haunt creators and entrepreneurs alike. But here’s the hard truth: motivation is unreliable. It’s fleeting. It can vanish as quickly as it appears. If you’re tired of waiting for that perfect moment to take action, it’s time to shift your focus. What if I told you that momentum, not motivation, is the true driver of success? A Whisper of Intuition You know my story. From a young age, I betrayed my true passions, suppressing my creativity in favor of playing it safe. It wasn’t until my grandfather’s sudden passing left me with his cameras that I listened to that inner voice. It was a nudge, barely louder than the noise around me, but I took the leap. Following my gut instead of my conditioned safety net changed my life in ways I couldn’t have imagined. Just like that Chinese proverb: the best time to plant a tree was 20 years ago. The second best time is now. This isn’t just true for me; it’s true for all...

The post 5 Proven Ways to Get Things Done (No Motivation Required) first appeared on Chase Jarvis.






AJ Jacobs: Can You Live Fully by the Constitution in 2024?

In this episode, New York Times bestselling author, A.J. Jacobs shares his fascinating approach to interpreting America’s foundational document in his latest book, “The Year of Living Constitutionally: One Man’s Humble Quest to Follow the Constitution’s Original Meaning”. An acclaimed journalist known for his unorthodox social experiments, A.J.’s work blends memoir, science, humor and a dash of self-help. In our conversation, A.J. takes us on a hilarious yet insightful journey into his attempt to understand the Constitution by living as closely as possible to its original meaning. We delve into his personal experiences, the larger debate about how the Constitution should be interpreted, and the Constitution’s relevance in today’s world. Some highlights we explore: Learn more about A.J. Jacobs’ unique experiments Tips on how to generate creative ideas and how to choose the right ones Learn more about what living 18th-century life is like in modern times How to apply the constitution, two-party system, and founding fathers into this experimental life What A.J. learned from leading this experimental life and his thoughts post-experiment And more… Enjoy! FOLLOW A.J. JACOBS: Facebook | twitter | website Listen to the Podcast Subscribe  

The post AJ Jacobs: Can You Live Fully by the Constitution in 2024? first appeared on Chase Jarvis.






Is Your Comfort Zone Holding You Back?

Imagine you’re gazing out at the vast expanse of possibility before you. On one side lies the safety of the familiar, and on the other, the allure of growth and the uncertainty that comes with chasing your dreams. What would you choose—honestly? It’s a decision we’re faced with every day, in both big and small ways. Do we play it safe, sticking to the well-trodden path, or do we venture into uncharted territory, risking failure in pursuit of something greater? This choice will define who you are and who you eventually become. Because here’s the truth: growth demands courage. It asks you to stretch beyond your limits, to face your fears head-on, and to embrace discomfort over and over again. But while growth can be bumpy sometimes, choosing growth over safety comes with its own kind of power that you can’t find anywhere else. This is when we feel the most alive. The Illusion of the Comfort Zone Let’s talk about the comfort zone. It’s that cozy little bubble where everything feels safe and predictable. You know the drill, you have your routines, and there’s little risk involved. But here’s the kicker—the comfort zone is an illusion. It gives you...

The post Is Your Comfort Zone Holding You Back? first appeared on Chase Jarvis.






Cal Newport on The Dark Side of Modern Productivity

In this episode, Cal Newport shares his revolutionary approach to productivity from his new book, Slow Productivity: The Lost Art of Accomplishment Without Burnout. Cal, a computer science professor at Georgetown and author of Deep Work and Digital Minimalism, explores how principles from historical figures like Newton, Galileo, and Jane Austen can help us thrive in today’s fast-paced world. We discuss his ‘lost art’ approach, debunk the myth of multitasking, and examine how technology impacts our focus and mental well-being. Some highlights we explore: How Newton’s work habits can be integrated into modern productivity practices. Why focusing on single tasks is more effective than multitasking. Advice on reducing social media’s impact to regain focus. Techniques from Cal’s new book to accomplish more without burning out. And more! Enjoy! FOLLOW CAL NEWPORT: twitter | youtube | website Listen to the Podcast Subscribe

The post Cal Newport on The Dark Side of Modern Productivity first appeared on Chase Jarvis.






The Deep Connection Between Creativity and Meditation

We often think of creativity and meditation as separate practices – one active and expressive, the other quiet and introspective. But what if these two seemingly different pursuits are actually deeply interconnected, each enhancing and amplifying the other? This fascinating relationship between creativity and meditation was at the heart of my recent conversation with Henry Shukman, an award-winning poet, author, and Zen master. Henry’s unique background bridges the worlds of artistic expression and contemplative practice, giving him profound insights into how meditation can unlock our creative potential. The Creative Benefits of a Quiet Mind One of the core benefits of meditation is learning to quiet the constant chatter of our minds. As Henry explained, this mental stillness creates space for new ideas and inspirations to emerge: “When we meditate regularly, we’re training our minds to be less caught up in the constant stream of thoughts and worries. And in that spaciousness, creativity has room to flourish. Novel connections and insights can bubble up from our subconscious.” This aligns with what many great artists and innovators have reported – that their best ideas often come during moments of mental quiet and relaxation. By cultivating a meditation practice, we’re essentially preparing fertile...

The post The Deep Connection Between Creativity and Meditation first appeared on Chase Jarvis.






Arthur Brooks: The (Real) Reason You’re Not Happy

Arthur C. Brooks, a Harvard professor and happiness expert, reveals that the real reason you're not happy has nothing to do with external achievements. Instead, it lies within you. Learn the science and philosophy behind true happiness and uncover the steps you can take to find lasting contentment. By following these insights and practices, you can start your journey toward a happier, more fulfilling life. Remember, happiness is not a destination but a direction. Start walking that path today.

The post Arthur Brooks: The (Real) Reason You’re Not Happy first appeared on Chase Jarvis.
