
Recognizing traumatic brain injury as a chronic condition fosters better care over the survivor's lifetime

A commentary published in the Journal of Neurotrauma calls for traumatic brain injury to be recognized as a chronic condition, as diabetes, asthma, depression, and heart failure already are. To provide comprehensive care for traumatic brain injury across individuals' lifespans, the authors propose adapting coordinated care models that they and others have developed, tested, and applied to various populations, including older adults, individuals living with depression, and post-intensive-care-unit survivors. The adapted models would improve communication and integration between brain injury specialists, including physical medicine and rehabilitation clinicians, and primary care physicians, fostering better long-term care for traumatic brain injury survivors and more support for both patients and their families.





Chronic Brain Trauma Is Extensive in Navy’s Elite Speedboat Crews

The pounding that sailors’ brains take from years of high-speed wave-slamming in the Special Boat Teams can cause symptoms that wreck their careers — and their lives.





Auphonic Joins NVIDIA Inception

We are proud to announce that we recently joined the NVIDIA Inception Program, which will help us speed up our deep learning development process and thus offer the best possible audio processing tools to our users.

What is NVIDIA Inception?

NVIDIA is a global leader in hardware and software for Artificial Intelligence (AI).
Their NVIDIA Inception Program enables us to leverage NVIDIA's cutting-edge technology by accessing a more diverse range of cloud and GPU (Graphics Processing Unit) offerings, the hardware behind most machine learning and deep learning model training worldwide. This allows us to streamline AI development and deployment and to train bigger machine learning models, so we can test and evaluate algorithms faster. The program also gives us the opportunity to collaborate with industry-leading experts and other AI-driven organizations.

Our Deep Learning Development Process

For our development process, more GPU capacity means significant savings in time and, consequently, in cost. As an example, one training cycle of our dynamic denoiser model takes almost a week on GPUs; the same training cycle on CPUs would take several months.

To illustrate, a CPU (Central Processing Unit) can be compared to a race car: very fast, but able to carry only a few packages at a time. A GPU, in this comparison, is a big truck that moves a huge number of packages more slowly. Deep learning algorithms require very large training datasets consisting of thousands of files, so our 'trucks', the GPUs, are the better hardware choice, as they process many computations simultaneously.
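To make the race-car/truck analogy concrete, here is a toy benchmark sketch, assuming PyTorch and a CUDA-capable GPU are available (matrix size and iteration count are arbitrary illustrative values). It times a large matrix multiplication, the core operation in deep learning training, on both device types:

    import time
    import torch

    def time_matmul(device: str, n: int = 2048, iters: int = 10) -> float:
        # Allocate two random n x n matrices on the target device.
        x = torch.randn(n, n, device=device)
        y = torch.randn(n, n, device=device)
        _ = x @ y  # warm-up run, so one-time initialization is not timed
        if device == "cuda":
            torch.cuda.synchronize()  # GPU kernels are launched asynchronously
        start = time.perf_counter()
        for _ in range(iters):
            _ = x @ y
        if device == "cuda":
            torch.cuda.synchronize()
        return (time.perf_counter() - start) / iters

    print(f"cpu : {time_matmul('cpu'):.4f} s per matmul")
    if torch.cuda.is_available():
        print(f"cuda: {time_matmul('cuda'):.4f} s per matmul")

On typical hardware, the GPU finishes this workload one to two orders of magnitude faster, the same kind of gap as between 'almost a week' and 'several months' above.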

The more GPU capacity we can use, the faster we get results for our tested algorithms, and the faster we know which direction to pursue to offer our users the best possible audio processing tools.
Unfortunately, the world is right in the middle of a global chip shortage, so the latest GPUs are very hard to get and very expensive to purchase, unless you have a partnership with a GPU manufacturer.

Conclusion

We are happy to join such a renowned program and look forward to the product updates it will allow us to implement, and potentially to a broader industry transformation.

You can read our full press release here: AuphonicNVIDIAInceptionPressRelease (pdf)








Auphonic Speech Recognition Engine using Whisper by OpenAI (Beta)

Today we release our first self-hosted Auphonic Speech Recognition Engine using the open-source Whisper model by OpenAI!
With Whisper, you can now integrate automatic speech recognition in 99 languages into your Auphonic audio post-production workflow, without creating an external account and without extra costs!

Whisper Speech Recognition in Auphonic

Until now, Auphonic users had to choose one of our integrated external service providers (Wit.ai, Google Cloud Speech, Amazon Transcribe, Speechmatics) for speech recognition: audio files were transferred to an external server and processed with external computing power that users had to pay for through their external accounts.

The new Auphonic Speech Recognition uses Whisper, which was published by OpenAI as an open-source project: the publicly shared GitHub repository contains a complete Whisper package, including source code, examples, and research results.
However, automatic speech recognition is a very time- and hardware-intensive process that can be incredibly slow on a standard home computer without dedicated GPUs (a minimal sketch of running the open-source model locally follows the list below). So we decided to integrate this service and offer you automatic speech recognition (ASR) by Whisper, processed on our own hardware just like any other Auphonic processing task, which gives you several benefits:

  • No external account is needed anymore to run ASR in Auphonic.
  • Your data doesn't leave our Auphonic servers for ASR processing.
  • No extra costs for external ASR services.
  • Additional Auphonic pre- and post-processing for more accurate ASR, especially for Multitrack Productions.
  • The quality of Whisper ASR is absolutely comparable to the “best” services in our comparison table.
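If you want to experiment with the underlying model yourself, here is a minimal sketch using the open-source whisper package as published by OpenAI (this is the public package, not our hosted integration; the filename and model size are placeholders):

    # pip install openai-whisper  (also requires ffmpeg installed on the system)
    import whisper

    # Published model sizes include: tiny, base, small, medium, large.
    model = whisper.load_model("base")

    # Whisper detects the language automatically; you can pin it via language="en".
    result = model.transcribe("episode.mp3")

    print(result["text"])  # the full transcript as one string
    for seg in result["segments"]:  # timestamped segments
        print(f"{seg['start']:7.2f}-{seg['end']:7.2f}  {seg['text']}")

Running this on a CPU works, but as noted above it can be very slow for larger models; our hosted integration runs Whisper on our own GPUs instead.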

How to use Whisper?

To use the Auphonic Whisper integration, just create a production or preset as usual and select “Auphonic Whisper ASR” as the “Service” in the Speech Recognition section.
This option automatically appears for Beta and paying users. If you are a free user but want to try Whisper, please just ask for access!

When your Auphonic speech recognition is done, you can download your transcript in different formats and edit or share it with the Auphonic Transcript Editor.
For more details about all our integrated speech recognition services, please visit our Speech Recognition Help, and watch this channel for Whisper updates, coming soon.

Why Beta?

We decided to launch Whisper for Beta and paying users only, as Whisper was published just at the end of September, and there has not been enough time to test every single use case sufficiently.
Another issue is the required computing power: to scale our GPU infrastructure appropriately, we need a beta phase in which we can monitor hardware usage and make sure there are no server overloads.

Conclusion

Automatic speech recognition services are evolving very quickly, and we've seen major improvements over the past few years.
With Whisper, we can now perform speech recognition on our own GPU hardware without extra costs; no external services are required anymore.

Auphonic Whisper ASR is available for Beta and paying users now; free users can ask for Beta access.
You are very welcome to send us feedback (directly in the production interface or via email), whether you notice something that works particularly well or discover any problems.
Your feedback is a great help to improve the system!








New Auphonic AutoEQ Filtering (Beta)

In addition to our Leveler, Denoiser, and Adaptive 'Hi-Pass' Filter, we are now releasing the missing equalization feature: the new Auphonic AutoEQ.
The AutoEQ automatically analyzes and optimizes the frequency spectrum of a voice recording to remove sibilance (De-esser) and to create a clear, warm, and pleasant sound. Listen to the audio examples below to get an idea of what it does.

Screenshot of manually adjusted example settings for the equalizer plug-in 'Pro-Q3' by fabfilter.

What is Equalization and why is it difficult?

Equalization (EQ) in audio recording and reproduction is the process of adjusting the volume of different frequency bands within a signal.
The following vocal EQ cheat sheet, published by Producer Hive, gives a first impression of what equalization can influence:

Vocal EQ Cheat Sheet by Producer Hive.
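At its core, equalization is frequency-dependent gain. As a generic illustration (a scipy sketch under simplified assumptions, not the Auphonic algorithm), the following attenuates a narrow band around 7 kHz, a region where sharp 'sss' sounds typically sit:

    # pip install numpy scipy
    import numpy as np
    from scipy.signal import iirnotch, lfilter

    fs = 48000  # assumed sample rate in Hz
    audio = np.random.randn(fs).astype(np.float32)  # 1 s of noise as a stand-in signal

    # Design a narrow cut centred on 7 kHz; a higher Q means a narrower band.
    b, a = iirnotch(w0=7000.0, Q=4.0, fs=fs)
    deessed = lfilter(b, a, audio)

A real de-esser applies such a cut dynamically, only while sibilance is detected, which is one reason static manual EQ settings are so easy to get wrong.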

On the other hand, it is very easy to ruin a good voice recording with heavy-handed manual equalization, resulting in voices that sound sharp or muddy, or even as if the speaker had a blocked nose.
Besides the skill and experience of an audio engineer, manual frequency adjustments also require a very good, linear studio playback device. For example, manual equalization performed on strongly bass-heavy speakers would most likely produce a very sharp, unpleasant listening experience on treble-heavy headphones.

For singletrack productions with more than one speaker, equalizing is also a complex and time-consuming process, as every voice has its own unique frequency spectrum and needs its own equalization. One could separate speakers with cuts or create a track envelope to fade from one speaker to another, but either solution is very tedious to do by hand.

That is where the Auphonic AutoEQ comes in! All those steps are now available in just one click!

How does the Auphonic AutoEQ work?

The Auphonic Web Service analyzes your audio content and splits the audio file into small, meaningful segments (music, silence, different speakers, etc.) so that every segment can be processed with the best-matching algorithms.
All our features, such as the Adaptive Leveler, Dynamic Denoising, Adaptive 'Hi-Pass' Filtering, and now the new AutoEQ filter option, are built on top of this basic processing.

Using Auphonic AutoEQ, spectral EQ profiles are created separately for each speaker and updated continuously over time. The aim of these time-dependent EQ profiles is to create a constant, pleasant sound in the output file even if voices change slightly during the recording, for example due to modified speaker-microphone positions.

Audio Examples

Here are two short audio examples, which demonstrate some features of our AutoEQ.
We recommend listening with headphones so you can hear all the details.

Example 1. Female Speaker with Background Music

In the following example (BCB: The Voices of Bainbridge Island) of a female narrator speaking over background music, you can clearly hear quite sharp 'sss' sounds in the voice. This sharpness is removed by the De-essing feature of the Auphonic AutoEQ, while the background music is left unchanged.

Original:
AutoEQed:

Example 2. Dialog of Male and Female Speakers

The next example (BCB: The Voices of Bainbridge Island) shows how the AutoEQ optimizes a singletrack recording containing two speakers with different voice characteristics. Our AutoEQ algorithms analyze each voice separately and calculate matching frequency adjustments to optimize the voice of each individual speaker.

Original:
AutoEQed:

AutoEQ Beta Integration in the Auphonic Web Service

To use the Auphonic AutoEQ, just create a production or preset as usual, toggle “Advanced Parameters” at the top right of the “Audio Algorithms” section, and select “Voice AutoEQ” under “Filtering”:

During a first test period, the AutoEQ will only be available for Beta and paying users, so we can incorporate your feedback and finalize an optimized version.
If you are a free user but want to try Auphonic AutoEQ, please just ask for access!

Practical Tips

For best results with Auphonic AutoEQ, your audio content still needs to be of sufficiently good quality, as no equalizer can recover frequencies that are not there in the first place. Audio files with low bitrates often lack important frequencies that cannot be restored by equalizing: AutoEQ boosts or cuts individual frequency bands; it is not a bandwidth extension. For more information about the required audio quality, see our earlier blog post: Audio File Formats and Bitrates for Podcasts.

Another important topic is the definition of the 'best result'. Equalizing is a very subjective task that differs greatly with personal taste. Auphonic AutoEQ is therefore set up to follow quite conservative equalization rules, applying subtle tweaks and removing obvious problems rather than catering to personal preferences. This also means your recording will see no significant changes from Auphonic AutoEQ if it already sounds reasonably good.

Conclusion

Auphonic audio post production algorithms have lately been improving in leaps and bounds, offering you new Beta features: the Beta Auphonic Denoiser, Beta Auphonic Speech Recognition, and Beta Auphonic AutoEQ.
Right now we are intensively fine-tuning all our current Beta features to release a new, upgraded version of our Auphonic Web Service as soon as possible.
Please watch this channel for further updates, coming soon.

If you have any feedback for us or want to become a Beta user, you are very welcome to comment directly in our production interface or to contact us via email!








New Auphonic Website, Free Advanced Algorithms and More Examples

To start a new decade of automatic audio post production with Auphonic, we are happy to launch a few updates:

New Website Design

If you opened our homepage today, you might have noticed that the website looks different from what you were used to. Keeping our customers’ feedback from last year in mind, we designed a new vision for Auphonic.

Our new website features a refreshed look with an improved, more engaging, and more functional user experience. More straightforward, intuitive, and accessible navigation will give you a seamless workflow and comfortable exploration of Auphonic’s features.
We hope it will be easier to explore the diversity of Auphonic’s applications. As before, you will have the full functionality of Auphonic available to you, plus some extra features if you are using our paid packages or subscriptions.

Take a look yourself: New Auphonic Landing Page

Free Access to our Advanced and Beta Algorithms

In the past, only paying Auphonic users had access to the advanced algorithm parameters, to multitrack advanced audio algorithms, and to our Dynamic Denoising and AutoEQ beta models.

We have now enabled all advanced algorithms for free users, and you can use them for two hours of audio, free, each month!

Using the Dynamic Denoiser, you can define whether Auphonic should remove only static noise or also fast-changing noises, and whether we should keep or eliminate music. For even greater control over speech intelligibility, you can manually adjust the amount of denoising to strike the perfect balance between clarity and ambiance.

The AutoEQ automatically analyzes and optimizes the frequency spectrum of a voice recording to remove sibilance (De-Esser) and to create a clear, warm, and pleasant sound.
The equalization of multi-speaker audio can be complex and time-consuming, as each voice requires its own unique frequency spectrum equalization. Our AutoEQ simplifies this process by creating separate, time-dependent EQ profiles for each speaker, ensuring a consistent and pleasant sound output despite any changes in the voices during the recording.

Our advanced algorithm parameters help you meet all common audio specifications of platforms like Netflix, Audible, podcasts, and broadcasters (EBU R128, ATSC A/85, radio and mobile, commercials) in one click. You can define a set of target parameters (integrated loudness, true peak level, dialog normalization, MaxLRA, MaxM, MaxS), like -16 LUFS for podcasts, and we will produce the audio accordingly.
In addition, they offer more control for multitrack productions and for the Adaptive Leveler.
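For comparison only, the same kind of loudness targets can be expressed with ffmpeg's loudnorm filter. This sketch (placeholder filenames; it illustrates the targets, not Auphonic's implementation) normalizes a file to -16 LUFS integrated loudness with a -1.5 dBTP true peak ceiling and a loudness range of 11 LU:

    import subprocess

    # loudnorm's I, TP and LRA parameters correspond to integrated loudness
    # (LUFS), maximum true peak (dBTP) and loudness range (LU).
    subprocess.run(
        ["ffmpeg", "-i", "input.wav",
         "-af", "loudnorm=I=-16:TP=-1.5:LRA=11",
         "output.wav"],
        check=True,
    )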

We would also like to give away free hours for new Auphonic users to try out our free advanced algorithms. Please use this URL to register your new Auphonic account. The code is valid until the end of March 2023 and will give you 5 extra production hours for the following month. Happy content creation!

More Audio Examples

There is no better way to experience Auphonic than hearing the difference our post production tool makes when applied to different types of audio and content.
We are happy to share that our new features page now contains new audio examples you can listen to while exploring our web tool, and we will add even more examples in the coming weeks.








Introducing Metadata Variables in Auphonic

We've listened to your feedback and are excited to announce the introduction of metadata variables in Auphonic for more advanced use of our Basic and Extended Metadata.
This new feature allows you to use metadata fields from your input files to automate workflows. You can easily reference any field by using { curly brackets } and typing the field name, such as {title}, {artist}, {album}, {track}, and more.
To get started, take a look at our Formatting Examples and the Table of all Variables to see all the available options.

Whether you are using the Auphonic Web Service or our API, metadata variables can be applied whenever metadata values are set. They are particularly helpful when working with Presets, Batch Productions, or Watch Folders!
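As an illustration of the API route, the following sketch creates a production whose title is assembled from {track} and {album}. The endpoint and field names follow the public Auphonic API documentation; the credentials and preset UUID are placeholders:

    # pip install requests
    import requests

    response = requests.post(
        "https://auphonic.com/api/productions.json",
        auth=("USERNAME", "PASSWORD"),  # placeholder credentials
        json={
            "preset": "PRESET_UUID",  # placeholder preset
            "metadata": {
                "album": "MyPodcast",
                "track": 25,
                # Variables are resolved by Auphonic when the production runs:
                "title": "Episode No. {track} of {album}!",
            },
            "output_basename": "{album}-{title}",
        },
    )
    response.raise_for_status()
    print(response.json()["data"]["uuid"])  # UUID of the new production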

For instance, consider the Input column of the following table as values that end up in the metadata as shown in the Output column:

Field | Input | Output
Album | MyPodcast | MyPodcast
Track | 25 | 25
Title | Episode No. {track} of {album}! | Episode No. 25 of MyPodcast!

Note:
Please mind that fields cannot refer to each other in cycles (e.g., if {title} refers to {album}, then {album} may not refer to {title}). The input form will not show any errors, but the metadata will most likely not be correct, and the production will generate a warning.

Formatting Examples

1. Generating Output File Basenames

With metadata variables you can automatically generate your output file basename from the input filename and podcast metadata, such as album, track, and title.
If you have, for example, a podcast preset or input file with the album name "The Placeholder Podcast", you can automatically use your input filename as the episode title. By combining the "album" name and the generated episode "title" with any text pattern, like "denoised" in this case, you can create your individual output file basename:

Field | Input | Output
Album | The Placeholder Podcast | The Placeholder Podcast
Title | {input_filename} | interview_jane_doe.wav
Output File Basename | {album}-{title}-denoised | The Placeholder Podcast-interview_jane_doe.wav-denoised

The next example shows how you can create truly unique output file basenames with timestamps. Here, the track number and the input file basename are combined into the episode title. Your unique output file basename can then combine this generated episode title with the time and date when your Auphonic production was started:

Field | Input | Output
Input File Basename | interview_jane_doe | interview_jane_doe
Track | 25 | 25
Title | {track}{input_basename} | 25interview_jane_doe
Output File Basename | {title}_{production_created_at:%H:%M_%m/%d} | 25interview_jane_doe_19-05_01-30

See example 3 for more time formatting examples, and this table for the full list of symbols that can be used for formatting dates.

2. Deriving "Title" and "Summary" from Podcast Metadata

If the input file metadata contains a track number (alternatively provided via the API or web form), it can be referenced in other fields. Together with the name of the podcast, stored in the "album" field, a value for the title can be created, as well as a summary containing all the information:

Field | Input | Output
Track | 25 | 25
Album | The Placeholder Podcast | The Placeholder Podcast
Tags | Anniversary | Anniversary
Title | {album}, Episode No. {track} | The Placeholder Podcast, Episode No. 25
Summary | {title} - {tags.0} | The Placeholder Podcast, Episode No. 25 - Anniversary

3. Adding Time and Date

The following time and date example outputs would be possible for a production created at 7:05 pm on Saturday, January 30th, 1999:

Field | Input | Output
(any) | example-{production_created_at} | example-1999-01-30
(any) | podcast-{production_created_at:%H:%M-%m/%d/%Y} | podcast-19:05-01/30/1999
(any) | output-{production_created_at:%I:%M%p-%m%d%y} | output-7:05PM-013099
(any) | record-{production_created_at:%a-%Y-%b-%d} | record-Sat-1999-Jan-30

See this table for the full list of symbols that can be used for formatting dates.
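The date and time symbols are standard strftime codes, so you can sanity-check a pattern with a few lines of Python. Note that Python's %I zero-pads the hour, printing 07:05PM where the table above shows 7:05PM:

    from datetime import datetime

    created = datetime(1999, 1, 30, 19, 5)  # Saturday, January 30th 1999, 7:05 pm

    print(created.strftime("%H:%M-%m/%d/%Y"))  # 19:05-01/30/1999
    print(created.strftime("%I:%M%p-%m%d%y"))  # 07:05PM-013099
    print(created.strftime("%a-%Y-%b-%d"))     # Sat-1999-Jan-30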

4. Using List Field "Outputfiles"

For the list fields "tags", "chapters", "outputfiles", and multitrack "input_filename/basename", you need to reference each value separately by adding .N to your variable, where N stands for ascending ordinal numbers starting from 0.
Combined with the .N, you can refer to the format, bitrate, suffix, and ending of every selected output file; for example, {outputfiles.0.format} refers to the format of the first output file in your list of outputfiles:

  • Output File 1 – format: WAV 16-bit PCM, bitrate: optimal, suffix: lossless, ending: wav
  • Output File 2 – format: MP3, bitrate: 112 kbps, suffix: lossy, ending: mp3

Field | Input | Output
(any) | file1-{outputfiles.0.suffix}-{outputfiles.0.ending} | file1-lossless-wav
(any) | file2-{outputfiles.1.format}-{outputfiles.1.bitrate}kbps | file2-lossy-mp3-112kbps
(any) | file2-bitrate-{outputfiles.1.bitrate:04} | file2-bitrate-0112

The bitrate output in the last row is formatted with 4 digits, defined by the suffix :04 attached to the variable.

For all available options, please see the Table of List Variables.
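To clarify how the .N indexing and the optional :format suffix fit together, here is a small illustrative re-implementation of the lookup logic in Python. It is a sketch of the documented behavior, not Auphonic's actual code:

    import re

    metadata = {
        "tags": ["Anniversary"],
        "outputfiles": [
            {"format": "WAV 16-bit PCM", "suffix": "lossless", "ending": "wav"},
            {"format": "MP3", "bitrate": 112, "suffix": "lossy", "ending": "mp3"},
        ],
    }

    def resolve(template: str, data: dict) -> str:
        # Expands {name}, {list.N} and {list.N.attr}, with an optional :spec.
        def repl(match: re.Match) -> str:
            path, _, spec = match.group(1).partition(":")
            value = data
            for part in path.split("."):
                # Numeric path parts index lists; all others are dict lookups.
                value = value[int(part)] if part.isdigit() else value[part]
            return format(value, spec)  # e.g. spec "04" zero-pads to 4 digits
        return re.sub(r"\{([^{}]+)\}", repl, template)

    print(resolve("file1-{outputfiles.0.suffix}-{outputfiles.0.ending}", metadata))
    # -> file1-lossless-wav
    print(resolve("file2-bitrate-{outputfiles.1.bitrate:04}", metadata))
    # -> file2-bitrate-0112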

All Metadata Variables

The following variables are available:

Variable | Referring to Field
{input_filename} | Full filename of the input file in a singletrack production
{input_basename} | Basename of the input file in a singletrack production (inputfile.wav becomes inputfile)
{title} | Title
{artist} | Artist
{album} | Album
{track} | Track
{genre} | Genre
{year} | Year
{subtitle} | Subtitle
{publisher} | Publisher
{url} | URL
{license} | License (Copyright)
{license_url} | License URL
{summary} | Summary (Description)
{output_basename} | Output File Basename
{production_created_at} | Time and date of production creation
{production_modified_at} | Time and date of production modification

List Variable | Referring to List Field
{input_filename.N} | Full filename of the input file of the N-th track in a multitrack production
{input_basename.N} | Basename of the input file of the N-th track in a multitrack production (inputfile.wav becomes inputfile)
{tags.N} | N-th element in Tags
{chapters.N.start} | Start time of the N-th chapter
{chapters.N.title} | Title of the N-th chapter
{chapters.N.url} | URL of the N-th chapter
{chapters.N.image} | Image file name of the N-th chapter
{outputfiles.N.format} | Format of the N-th output file
{outputfiles.N.bitrate} | Bitrate of the N-th output file
{outputfiles.N.suffix} | Suffix of the N-th output file
{outputfiles.N.ending} | Format ending of the N-th output file

For detailed use, please see Formatting Examples.

Conclusion

Metadata Variables are a powerful tool for organizing your productions whenever metadata values are set. Those field references are very convenient when distinguishing between different files at a glance, particularly when working with Presets, Batch Productions, or Watch Folders.

Please do not hesitate to contact us if you have any questions or feedback!








New Auphonic Transcript Editor

We're excited to roll out an upgraded version of our Transcript Editor, focusing on enhancing your transcription workflow and making it more intuitive, especially for mobile users. This overhaul introduces several key improvements and features designed to streamline the transcription process.

Click here for a Live Demo

What's new?

Line by Line Editing

Your transcript is rendered line by line, which allows for precise editing of every single timestamp. Depending on the speech recognition engine, editing can be done at the word or phrase level.
For optimal results, we suggest utilizing our Auphonic Whisper ASR engine.

A paragraph with 9 lines; each line represents a "subtitle line" (.vtt, .srt).

You can split or combine paragraphs and lines using the Enter and Backspace keys. Our new Playback Slider enables seamless scrolling through the text, while the currently selected word is highlighted as you go. With the switchable Play on Click function, you can start playback from anywhere in the transcript.

Automatic Shownotes and Chapters

If you enable Automatic Shownotes and Chapters in the Production form, we include AI-generated shownotes and chapters directly in the Transcript Editor. You can edit chapter times and text directly within the Transcript Editor. Once you click Save (top right), any modifications made to the shownotes and chapters are also saved back to the production.

Screenshot of Automatic Shownotes and Chapters within the Transcript Editor.

You are also able to edit chapter times directly within the transcript editor. Please note that this only works within the Transcript section of the editor to ensure precise placement of chapters.

Screenshot of Edit Chapter Time.

Local History: Undo and Redo

Our Local History feature offers convenient undo and redo functionality. This means you can effortlessly revert changes or redo them as needed, providing you with greater control and flexibility during the editing process.

Edit Speakers

Our revamped Transcript Editor automatically assigns speakers in Multitrack Productions. You can use the Track Identifier in our production form to assign speakers and easily edit, remove, or add new ones within the Transcript Editor. So it's clear who says what at any time!

Screenshot of Edit Speaker.

Confidence Highlighting

Within the transcription, areas of low confidence are highlighted by our Highlight Low Confidence feature, so you can check up on the AI and edit words it is not entirely confident about. Hovering over a highlighted area also shows you how confident the AI is about it.

Offline Mode

This feature enables you to download and share the Transcript Editor for offline editing. This means you can share the *.html file with someone else for editing purposes. Simply use the Download Editor action to obtain an offline version of the Transcript Editor.

Export Transcript

You can export your transcript in all currently relevant formats (.srt, .vtt, .txt, .html, .pdf) and include them in your publishing process or video editing software.

Screenshot of Export Transcript.
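For context, .srt (like the closely related .vtt) is a plain-text format: numbered cues, each with a start and end timestamp and the subtitle text. A minimal hand-written example with hypothetical content:

    # Write a minimal .srt file with two hypothetical cues.
    srt = (
        "1\n"
        "00:00:00,000 --> 00:00:03,500\n"
        "Welcome to our podcast!\n"
        "\n"
        "2\n"
        "00:00:03,500 --> 00:00:07,250\n"
        "Today we talk about transcripts.\n"
    )
    with open("episode.srt", "w", encoding="utf-8") as f:
        f.write(srt)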

Responsive Design

We want to make podcasting as easy as possible. The responsive design of our Transcript Editor allows for fast editing on mobile devices - so you can edit what you're saying anywhere, anytime!

Screenshot of Transcript Editor on a mobile device.

Why use it?

Apple Podcasts paved the way for podcasts to feature transcripts, so listeners can read through what you are saying and look for specific parts of your talk they want to revisit. That could be book recommendations, advertisements, or one of the ideas you want to share with the world.

We're trying to make the transcription process as automated and easy for you as possible. Especially with Multitrack Productions, we strive to take all the work off your shoulders and let the AI do its magic.

Additionally, we aim to simplify manual corrections wherever automated processes encounter challenges, ensuring that even these adjustments are as effortless as possible.

Try it now on auphonic.com!

Feedback

If you have feature requests or feedback for our new Transcript Editor, please let us know! You can also leave a comment in the feedback section on the status page of your specific production.
We're looking forward to hearing from you!








Introducing Auphonic Team Accounts

The new Auphonic team accounts streamline billing for companies, teams, or groups and are available to business users. A team account's owner can invite team members and admins, all of whom share the team account's credits for their productions.

The team account feature only impacts billing. Team members cannot access each other’s productions, presets, etc.
If you're interested in switching to a team account, please contact us!

What are Auphonic Teams?

A team consists of an Owner with Members and optional Admins. The Owner and Admins can add and remove team members as well as change their roles in the team. Every added account can use Auphonic as usual, but only the account Owner's credits are used for billing productions.

This simplifies the invoicing process for teams using multiple Auphonic accounts.
One Owner provides the credits, and all of the team can use them.

Overview of the team accounts section in the account settings.

Inviting Users

After gaining access to the team account feature, you can invite new team members via the team account section on the account settings page. Invited team members can be:

  • Existing Auphonic users
  • New users who do not yet have an account

When inviting users, you can assign them a role: Member or Admin. Sent invitations can be cancelled at any time.

Managing Team Account Members

The team account Owner and Admins can remove members and change roles within the team.

Editing roles

Roles and Permissions

Within the team, there are three roles: Owner, Admin and Member. Each role has specific permissions:

All roles and permissions.

Note that the Owner role, held by the account creator, cannot be changed manually.
Please contact us if you want to change the owner of a team account!

Billing

The Owner of the team account is responsible for purchasing credits. Productions made by team members will use the team account's credits. As a team account member, you will see the team account owner's credit balance instead of your own.

If you wish to access your personal credits, you can leave the team at any time.

Conclusion

With the introduction of team accounts, we provide a solution to simplify the production process for teams. One account is responsible for providing the credits and all team members can use them collectively!

Contact us if you are a business user and interested in using our new feature.

Feedback

We are always happy to receive feedback on our features!
If you have thoughts, ideas or recommendations, drop us a line using our contact form or email us directly.

We're looking forward to hearing from you!








The Impact Of AI Software On Architecture And Design: Revolutionizing Creativity And Efficiency

The emergence of AI software in the field of architecture and design has sparked a significant shift in how professionals approach their work. With advancem ...





Recreating The Iconic 'Mouse in Manhattan' Scenery From Tom & Jerry Classic Cartoons

Tom and Jerry, the mischievous cat and clever mouse duo, have been captivating audiences for generations with their hilarious antics. As a child, I was capt ...



  • Design Round-up


California Study: Four Widely Used Neonicotinoid Pesticides Harm Bees

Center for Biological Diversity Press Release WASHINGTON – Four commonly used neonicotinoid pesticides can harm bees and other pollinators, according to a new analysis by California’s Department of Pesticide Regulation. The study found that current approved uses of the “neonics” … Continue reading





On the Edge of the La Grave Heritage Site

Under a sky dotted with cottony clouds, the La Grave heritage site in the Îles de la Madeleine unfolds in all its splendor. Elegant sailboats rock gently on the calm water of the harbor, their masts rising proudly toward the brilliant blue of the sky. To the right, typical wooden buildings line the...





On the Edge of the La Grave Heritage Site II

Under an azure sky dotted with light clouds, the La Grave heritage site in the Îles de la Madeleine stretches out with spellbinding serenity. The beach of multicolored pebbles extends languidly to the waves that gently caress the shore. The colorful houses, with their pointed roofs and picturesque facades, stand in a harmonious line, bearing witness...





Mike Pence Opposes Pardons for Jan. 6 Insurrectionists

Former Vice President Mike Pence spoke at the Dispatch's Summit 2024 and made clear he opposes President-elect Donald Trump's plan to pardon January 6 rioters - especially those who attacked police. "At the end of the day, I'll always believe that, by God's grace, we did our duty [on January 6] to support and defend the Constitution of the United States of America," Pence said according to Dispatch on social media.





Meredith Brooks Abbott – American Impressionist

Modern American Impressionist Meredith Brooks Abbott is one of California’s leading plein air painters. Born in 1938 in picturesque Carpinteria, on California’s central coast, Abbott focuses her work on the vanishing rural landscape. She is a founding member of The Oak … Continue reading






Genotyping and Health Monitoring in Aquaculture

Boost aquaculture efficiency with advanced tools for precise genetic differentiation, pathogen detection, and real-time health monitoring. Enhance fish hea



  • Cell & Molecular Biology


New Biomarkers Reveal More About Unexplained Chronic Itch

There are millions of people around the world who deal with chronic itching with no clear cause, or chronic pruritus of unknown origin (CPUO)



  • Cell & Molecular Biology


3 Key Features of Pets Health Monitoring Systems

It’s always interesting to observe how technologies are transforming different industries by bringing the efficiency of many processes to a new level and also reducing costs. Healthcare is one of the industries that has changed a lot with the introduction of new tech solutions. However, when we are talking about healthcare we should mention not […]

The post 3 Key Features of Pets Health Monitoring Systems appeared first on 404 Tech Support.





An Animation of Tectonic Plate Movement Over Millennia

Our planet has an outer layer made up of several plates, which move relative to one another. While we may take this knowledge for granted, this theory of p



  • Earth & The Environment


Correlation between sleep hygiene and risk of chronic illness

The connection between sleep patterns and chronic diseases has been an area of increasing interest, particularly as wearable devices like Fitbit become mor



  • Genetics & Genomics


New Biomedical Device Offers Rapid Relief for Chronic Pain Patients

How can ultrasonic waves be used to treat chronic pain? This is what a recent study published in the journal Pain hopes to address as a team of researchers





Study Links Chronic Cannabis Use with Sleep and Memory Problems

A study published in The American Journal of Drug and Alcohol Abuse found that long-term cannabis users experience impaired sleep and memory. The findings





[ENVS Lecture Series] A Rural Agrarian Reckoning: Multigenerational Farmers Seeking to Repair Soil, Agriculture, and Rural America Itself

Nov 21, 2024, 12pm EST

Sydney Giacalone, a doctoral candidate in anthropology at Brown University, researches multigenerational farmers and ranchers across the U.S. who are transitioning away from conventional practices toward environmentally and socially reparative approaches. This experience often involves questioning past education and internalized ideologies and learning to collaborate with nonhuman life to repair degraded ecologies. Join Giacalone to learn more about her research.

Building: Curtis Hall
Campus Location: Medford/Somerville campus
City: Medford, MA 02155
Campus: Medford/Somerville campus
Location Details: Curtis Hall Multipurpose Room (474 Boston Ave., Medford, MA)
Wheelchair Accessible (for in-person events): Yes
Open to Public: Yes
Primary Audience(s): Alumni and Friends
Event Type: Academic Date/Deadline, Lecture/Presentation/Seminar/Talk
Event Sponsor Details: Environmental Studies Program
Event Contact Name: Sinet Kroch
Event Contact Email: sinet.kroch@tufts.edu
RSVP Information: RSVP only needed for virtual attendees
More info: tufts.zoom.us…



  • 2024/11/21 (Thu)


Dealing with human rights complaints in unionized workplaces

Rhonda B. Levy and Douglas Sanderson examine the Human Rights Tribunal of Ontario's preliminary hearing to determine whether allegations made under the Human Rights Code fell within the exclusive jurisdiction of a labour arbitrator, or whether the Tribunal had concurrent jurisdiction over employment-related human rights matters in a unionized workplace.

Human Resources Director Canada





AI and Workplace Monitoring

Bradford J. Kelley says employers are increasingly using AI and other technology to measure or enhance worker productivity and efficiency.

Asian Robotics Review






Ontario, Canada’s Divisional Court Confirms Unionized Workplaces May Pursue Human Rights Claims Before Labour Arbitrator or Human Rights Tribunal





Second Circuit Finds No Successor Liability for ERISA Withdrawal Where Employer Did Not Acquire Unionized Facility or Employees

On January 27, 2022, in New York State Teamsters Conference Pension and Retirement Fund v. C&S Wholesale Grocers, Inc., the Second Circuit joined the Third, Seventh, and Ninth Circuits in applying the doctrine of successor liability to claims for withdrawal liability under the Employee Retirement Income Security Act (ERISA). The court did not, however, find successor liability in this case because the acquiring company specifically did not acquire the facility or employees that triggered the liability.





Questioning the “Diversity Questionnaire”

Watershed events in recent years, like the resurgence of Black Lives Matter and #MeToo, have galvanized organizations to step up their commitments in the Diversity, Equity and Inclusion space and to gauge the “cultural temperature.” The persisting pandemic has raised the stakes. Accordingly, many companies seeking a roadmap for these efforts increasingly want to begin their journey with a snapshot of who’s in the mix and how they are feeling. Enter the “Diversity Questionnaire.”

Littler Principal Cindy-Ann Thomas and her guest, Littler Shareholder Alyesha Dotson:





Department of State Announces Pilot Program for Renewal of Certain H-1B Nonimmigrant Visas

  • A new Department of State pilot program is designed to allow certain Indian and Canadian nationals to renew their H-1B nonimmigrant visas in the United States.
  • The process is expected to expedite renewals and will be in effect from January 29, 2024 until April 1, 2024.





FCA and PRA proposals for more intensive monitoring and public reporting of diversity are groundbreaking

Natasha Adom discusses proposals from the FCA, PRA and Bank of England that would require financial services firms to provide more detailed monitoring and reporting of diversity and inclusion.

IFA Magazine






Congressional Democrats Want to Weaponize Federal Labor Law

Michael Lotito writes about unions and their allies’ attempts to sneak parts of the Protecting the Right to Organize (PRO) Act into the budget bill.

The Wall Street Journal






Ontario, Canada Human Rights Tribunal Finds it Has Concurrent Jurisdiction with Labour Arbitrators to Decide Human Rights Claims in Unionized Workplaces

The Human Rights Tribunal of Ontario recently held a preliminary hearing to determine whether allegations made under the Human Rights Code (Code) fell within the exclusive jurisdiction of a labour arbitrator, or whether the Tribunal had concurrent jurisdiction over employment-related human rights matters in a unionized workplace.  





Because Hamiltonians Labor for Equity: Natasha Jenkins ’07

Natasha Jenkins’ alma mater profiles her and her roles at Littler and as the president of Illinois’ Cook County Bar Association (CCBA). 

The Spectator






Unionizing Student Athletes Called ‘Existential Threat’ by GOP

Tyler A. Sims says student athletes shouldn’t be classified as employees under the National Labor Relations Act.

Bloomberg Law






Dartmouth basketball vote shows unionization ‘can happen anywhere,’ attorney says

Tyler Sims discusses the potential wage-and-hour implications of Dartmouth College’s men’s basketball team voting to form what may become the NCAA’s first-ever athlete labor union.

HR Dive






What Unionized and Non-Unionized Employers Need to Know About OSHA's Worker Walkaround Rule