Does It Work? Monitoring the Effectiveness of Stream Management Practices in Alaska By www.fs.fed.us Published On :: Thu, 07 Sep 2006 12:00:36 PST The condition of aquatic habitat and the health of aquatic species, particularly salmon, are significant concerns in the Pacific Northwest. Land management agencies use fish and riparian guidelines intended to maintain or improve aquatic habitat.
Broadening Participation in Biological Monitoring: Handbook for Scientists and Managers By www.fs.fed.us Published On :: Fri, 08 Sep 2006 12:00:00 PST Participatory (collaborative, multiparty, citizen, volunteer) monitoring is a process that has grown in popularity and use in both developing and industrialized societies over the last several decades. It reflects the understanding that natural resource decisions are more effective and less controversial when stakeholders who have an interest in the results are involved in the process. Enough such projects have now been organized, tried, and evaluated that sufficient information exists to recommend a comprehensive approach to implementing such processes. This handbook was written for managers and scientists in the United States who are contemplating a participatory approach to monitoring biological resources, especially biodiversity. It is designed as a how-to manual with discussions of relevant topics, checklists of important considerations to address, and resources for further information. Worksheets for developing, implementing, and evaluating a monitoring plan are posted on a companion Web site. The subject matter is divided into three stages of a monitoring project encompassing a total of 22 topical modules, which can be used in any sequence on an ongoing basis. The stages and their modules are (1) planning: documentation, goals, indicators, collaboration, decisions, context, organization, participants, communication, incentives, design, and resources; (2) implementation: training, safety, fieldwork, sampling, data, and quality; and (3) follow-through: analysis, reporting, evaluation, and celebrations. Collaboration always involves co-learning, so documenting choices, plans, and activities with the Web site worksheets is integral to the manual's effectiveness.
Monitoring Forests From Space: Quantifying Forest Change by Using Satellite Data By www.fs.fed.us Published On :: Mon, 04 Dec 2006 12:00:36 PST Change is the only constant in forest ecosystems. Quantifying regional-scale forest change is increasingly done with remote sensing, which relies on data sent from digital camera-like sensors mounted on Earth-orbiting satellites. Through remote sensing, changes in forests can be studied comprehensively and uniformly across time and space.
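The abstract does not name a specific change-detection method; one common remote-sensing technique, shown here purely as an illustration, is to compute a vegetation index (NDVI) from red and near-infrared reflectance at two dates and flag pixels with a sharp drop. All reflectance values below are invented.

```python
# Illustrative sketch only: NDVI differencing between two image dates,
# a common change-detection technique (not necessarily the report's method).
# All reflectance values below are invented.

def ndvi(red, nir):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

# Hypothetical per-pixel red/near-infrared reflectances at dates t1 and t2.
red_t1, nir_t1 = [0.05, 0.06, 0.04], [0.50, 0.48, 0.52]
red_t2, nir_t2 = [0.05, 0.20, 0.04], [0.49, 0.25, 0.51]

# Per-pixel NDVI change between the two dates.
change = [ndvi(r2, n2) - ndvi(r1, n1)
          for r1, n1, r2, n2 in zip(red_t1, nir_t1, red_t2, nir_t2)]

# Flag pixels whose NDVI dropped sharply (possible harvest or disturbance).
disturbed = [i for i, d in enumerate(change) if d < -0.2]
```

Real analyses work on full rasters and calibrate thresholds against reference data; the -0.2 cutoff here is arbitrary.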
Sustainable Forestry in Theory and Practice: Recent Advances in Inventory and Monitoring, Statistics and Modeling, Information and Knowledge Management, and Policy Science By www.fs.fed.us Published On :: Fri, 22 Dec 2006 12:00:00 PST The importance to society of the environmental services provided by forest ecosystems has increased significantly during the last few decades. A growing global concern with the deterioration of forests, beginning perhaps most noticeably in the 1980s, has led to increasing public awareness of the environmental, cultural, economic, and social values that forests provide. Around the world, ideas of sustainable, close-to-nature, and multifunctional forestry have progressively replaced the older perception of forests as merely a source of timber. The international impetus to protect and sustainably manage forests has come from global initiatives aimed at the management, conservation, and sustainable development of all types of forests and forestry. A few of the more notable initiatives include the 1992 Earth Summit in Rio de Janeiro, Brazil (United Nations Conference on Environment and Development, UNCED); regional follow-ups to the Earth Summit such as the Montreal Process and Helsinki Accords; the forest elements of the Convention on Biological Diversity (CBD); and the Framework Convention on Climate Change (FCCC).
Area-specific recreation use estimation using the National Visitor Use Monitoring program data By www.fs.fed.us Published On :: Thu, 16 Aug 2007 09:00:00 PST Estimates of national forest recreation use are available at the national, regional, and forest levels via the USDA Forest Service National Visitor Use Monitoring (NVUM) program. In some resource planning and management applications, analysts want recreation use estimates for subforest areas within an individual national forest or for areas that combine portions of several national forests. In this research note we detail two approaches whereby the NVUM sampling data may be used to estimate recreation use for a subforest area within a single national forest or for a subforest area combining portions of more than one national forest. The approaches differ in their data requirements, complexity, and assumptions. In the "new forest" approach, recreation use is estimated by using NVUM data obtained only from NVUM interview sites within the area of interest. In the "all-forest information" approach, recreation use is estimated by using sample data gathered on all portions of the national forest(s) that contain the area of interest.
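The note's actual estimators are not reproduced here; as a loose illustration of the distinction between the two approaches, assume each NVUM interview carries an expansion weight that scales observed visits up to total use. The site IDs, weights, visit counts, and area-share factor are all hypothetical.

```python
# Hypothetical illustration of the two subforest approaches described above,
# NOT the note's actual NVUM estimators. Assume each interview site-day
# carries an expansion weight that scales observed visits to total use.

# (site_id, inside_area_of_interest, expansion_weight, observed_visits)
interviews = [
    ("A", True,  120.0, 4),
    ("B", True,   80.0, 2),
    ("C", False, 100.0, 5),  # same forest, outside the area of interest
]

# "New forest" approach: use only interviews taken inside the area.
new_forest_estimate = sum(w * v for _, inside, w, v in interviews if inside)

# "All-forest information" approach (rough sketch): use all interviews on
# the containing forest(s), scaled by an assumed share of total forest use
# occurring in the area (here an invented 60 percent).
area_share = 0.6
all_forest_estimate = area_share * sum(w * v for _, _, w, v in interviews)
```

The real "all-forest information" approach has its own weighting assumptions; the point of the sketch is only that the two estimates draw on different subsets of the sample data.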
Regional population monitoring of the marbled murrelet: field and analytical methods By www.fs.fed.us Published On :: Thu, 16 Aug 2007 09:00:00 PST The marbled murrelet (Brachyramphus marmoratus) ranges from Alaska to California and is listed under the Endangered Species Act as a threatened species in Washington, Oregon, and California. Marbled murrelet recovery depends, in large part, on conservation and restoration of breeding habitat on federally managed lands. A major objective of the Northwest Forest Plan (the Plan) is to conserve and restore nesting habitat that will sustain a viable marbled murrelet population. Under the Plan, monitoring is an essential component and is designed to help managers understand the degree to which the Plan is meeting this objective. This report describes methods used to assess the status and trend of marbled murrelet populations under the Plan.
Ozone injury in west coast forests: 6 years of monitoring By www.fs.fed.us Published On :: Thu, 16 Aug 2007 09:30:00 PST Results from six years of monitoring for ozone injury by the Pacific Northwest Research Station Forest Inventory and Analysis Program are reported. The methods used to evaluate injury, compute an injury index, and estimate risk are described. Extensive injury was detected on ozone biomonitoring sites in California in all years, with ponderosa and Jeffrey pines, mugwort, skunkbush, and blue elderberry showing injury. Little or no injury was detected in Oregon and Washington. The relation of observed injury to ambient ozone levels is discussed. The areas with the highest modeled risk of ozone injury are those east of Los Angeles, the southern Sierra Nevada, and portions of the central coast.
National Visitor Use Monitoring implementation in Alaska By www.fs.fed.us Published On :: Tue, 19 Feb 2008 11:15:00 PST The USDA Forest Service implemented the National Visitor Use Monitoring (NVUM) program across the entire National Forest System (NFS) in calendar year 2000. The primary objective of the NVUM program is to develop reliable estimates of recreation use on NFS lands via a nationally consistent, statistically valid sampling approach. Secondary objectives of NVUM are to characterize recreation visits, collect data in support of regional economic analyses, and gauge national forest visitor satisfaction. We document and review the round 1 NVUM implementation in the USDA Forest Service Alaska Region (R-10), examining the R-10 prework; sample-day implementation; survey completion rates; sampling at cabins, boat docks, and air carriers; and the NVUM expansion weights assigned to survey cases. Several opportunities to improve the implementation of the standard NVUM protocols in R-10 are identified.
Lichen bioindication of biodiversity, air quality, and climate: baseline results from monitoring in Washington, Oregon, and California By www.fs.fed.us Published On :: Thu, 3 Apr 2008 06:31:00 PST Lichens are highly valued ecological indicators known for their sensitivity to a wide variety of environmental stressors such as air quality and climate change. This report summarizes baseline results from the U.S. Department of Agriculture, Forest Service, Forest Inventory and Analysis (FIA) Lichen Community Indicator covering the first full cycle of data collection (1998-2001, 2003) for Washington, Oregon, and California. During this period, FIA conducted 972 surveys of epiphytic macrolichen communities to monitor both spatial and long-term temporal trends in forest health. Major research findings are presented with emphasis on lichen biodiversity as well as bioindication of air quality and climate. Considerable effort is devoted to mapping geographic patterns and defining lichen indicator species suitable for estimating air quality and climate.
A protocol using coho salmon to monitor Tongass National Forest Land and Resource Management Plan standards and guidelines for fish habitat By www.fs.fed.us Published On :: Thu, 22 May 2008 14:00:00 PST We describe a protocol to monitor the effectiveness of the Tongass Land Management Plan (TLMP) management standards for maintaining fish habitat. The protocol uses juvenile coho salmon (Oncorhynchus kisutch) in small tributary streams in forested watersheds. We used a 3-year pilot study to develop detailed methods to estimate juvenile salmonid populations, measure habitat, and quantitatively determine trends in juvenile coho salmon abundance over 10 years. Coho salmon have been shown to be sensitive to habitat alterations, and we use coho salmon parr as the primary indicator in the protocol. A priori criteria for type I and type II error rates, effect size, and sample sizes for the protocol were derived with estimates of variance computed from the 3-year pilot study. The protocol is designed to detect trends in abundance of coho salmon parr, as well as coho salmon fry and Dolly Varden (Salvelinus malma), in small streams managed according to TLMP standards and guidelines and to compare these to trends in unmanaged (old-growth) watersheds. Trends are adjusted to account for statistically significant habitat covariates. This information provides an important element in monitoring land management practices in the Tongass National Forest. The methods we describe may have application to monitoring protocols elsewhere for fish populations and land management practices.
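The protocol's variance estimates and criteria are not reproduced in the abstract; as a generic sketch of the a priori design step it describes, the standard normal-approximation formula gives the sample size needed to detect a specified effect at chosen type I and type II error rates. The effect and standard deviation values below are invented, not the pilot-study estimates.

```python
# Generic power-analysis sketch of the a priori design step: sample size
# needed to detect a given effect at chosen type I/II error rates, via the
# standard normal-approximation formula n = ((z_a + z_b) * sigma / d)^2.
# The effect and sigma values are invented, not the pilot-study estimates.
from math import ceil, erf, sqrt

def z_quantile(p):
    """Standard normal quantile via bisection on the CDF."""
    cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
    return (lo + hi) / 2.0

def samples_needed(effect, sigma, alpha=0.05, beta=0.20):
    """Samples needed to detect `effect` with two-sided alpha and power 1 - beta."""
    z = z_quantile(1.0 - alpha / 2.0) + z_quantile(1.0 - beta)
    return ceil((z * sigma / effect) ** 2)

# E.g. a half-standard-deviation effect at alpha = 0.05 and 80 percent power.
n = samples_needed(effect=0.5, sigma=1.0)
```

For the murrelet-style trend protocol itself, variance components, a trend model, and habitat-covariate adjustment would replace this simple z-test setup.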
Northwest Forest Plan-the first 10 years (1994-2003): Socioeconomic monitoring of the Okanogan-Wenatchee National Forest and five local communities By www.fs.fed.us Published On :: Tue, 08 Jul 2008 09:10:00 PST This report examines socioeconomic changes that occurred between 1990 and 2003 associated with implementation of the Northwest Forest Plan (the Plan) in and around lands managed by the Okanogan-Wenatchee National Forest in Washington state. Our findings are based on quantitative data from the U.S. census, the USDA Forest Service and other federal databases, historical documents, and interviews with Forest Service employees and members of five case study communities: Naches Valley, Cashmere, Entiat, Twisp, and the Upper Okanogan Valley. We explore how the Plan affected the flow of socioeconomic benefits associated with the Okanogan-Wenatchee National Forest, such as the production of forest commodities and forest-based recreation, agency jobs, procurement contract work for ecosystem management activities, grants for community economic assistance, payments to county governments, and opportunities for collaborative forest management. The greatest socioeconomic change stemming from the national forest during the study period was the sharp decline in timber harvest activities, a change that had been underway prior to the Plan. This decline not only affected timber industry jobs in local communities, but also resulted in declining agency budgets and staff reductions. Communities' responses differed. Communities with greater economic diversity were able to absorb the changes in forest management, whereas communities more heavily dependent on timber experienced an additional destabilizing effect.
Development of lichen response indexes using a regional gradient modeling approach for large-scale monitoring of forests By www.fs.fed.us Published On :: Tue, 12 Oct 2010 14:50:00 PST A regional lichen gradient model developed from community data is a powerful tool for deriving lichen indexes of response to environmental factors for large-scale, long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of the United States to help monitor the status of forested ecosystems.
Estimation of national forest visitor spending averages from National Visitor Use Monitoring: round 2 By www.fs.fed.us Published On :: Wed, 17 Jul 2013 10:38:00 PST The economic linkages between national forests and surrounding communities have become increasingly important in recent years. One way national forests contribute to the economies of surrounding communities is by attracting recreation visitors who, as part of their trip, spend money in communities on the periphery of the national forest. We use survey data collected from visitors to all units in the National Forest System to estimate the average spending per trip of national forest recreation visitors engaged in various types of recreation trips and activities. Average spending of national forest visitors ranges from about $33 per party per trip for local residents on day trips to more than $983 per party per trip for visitors downhill skiing on national forest land and staying overnight in the local national forest area. We report key parameters needed to complete economic contribution analyses for individual national forests and for the entire National Forest System.
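As a hedged arithmetic sketch of how such spending averages feed a contribution analysis: multiply total trips by each segment's share and its average spending per party per trip, then sum. The trip total, segment shares, and the "other" average are invented; only the $33 and $983 figures come from the abstract.

```python
# Hedged arithmetic sketch of a spending-based contribution estimate: split
# total trips into segments and multiply by each segment's average spending
# per party per trip. The trip total, segment shares, and the "other"
# average are invented; only $33 and $983 come from the abstract above.
total_trips = 100_000  # hypothetical annual trips to one forest

segments = {
    # segment: (share of trips, average spending per party per trip, $)
    "local day trip":            (0.70, 33.0),
    "overnight downhill skiing": (0.02, 983.0),
    "other":                     (0.28, 200.0),  # invented placeholder
}

total_spending = sum(total_trips * share * spend
                     for share, spend in segments.values())
```

A full contribution analysis would feed such spending totals into regional input-output multipliers rather than stopping at direct spending.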
Silviculture and monitoring guidelines for integrating restoration of dry mixed-conifer forest and spotted owl habitat management in the eastern Cascade Range By www.fs.fed.us Published On :: Thu, 03 Mar 2016 9:55:00 PST This report addresses the need for consistent regional guidelines for stand-level management that integrates goals and objectives for dry forest restoration and habitat management for the northern spotted owl.
Economic and environmental benefits of community-scale cordwood hydronic heaters in Alaska—three case studies By www.fs.fed.us Published On :: Thu, 03 Mar 2016 9:55:00 PST Over the past decade, the use of wood for thermal energy in Alaska has grown significantly; since 2000, nearly 30 new thermal wood-energy installations have been established in the state.
Historical Forest Structure, Composition, and Spatial Pattern in Dry Conifer Forests of the Western Blue Mountains, Oregon By www.fs.fed.us Published On :: Thu, 16 Nov 2017 08:00:00 PST In frequent-fire forests of the interior Western United States, historical (prefire suppression) conditions are often used as a reference to set management objectives, guide prescriptions, and monitor treatment effectiveness. We quantified the historical size, density, composition, and spatial patterns of dry mixed-conifer forests in the Blue Mountains of Oregon to establish reference conditions that could be used for ongoing forest-restoration efforts.
Variation in Shrub and Herb Cover and Production on Ungrazed Pine and Sagebrush Sites in Eastern Oregon: A 27-Year Photomonitoring Study By www.fs.fed.us Published On :: Fri, 30 Mar 2007 09:30:00 PST Study objectives were to evaluate yearly fluctuations in herbage canopy cover and production to aid in defining characteristics of range condition guides. Sites are located in the forested Blue Mountains of central Oregon. They were selected from those used to develop range condition guides, where soil, topographic, and vegetation parameters were measured as a characterization of best range condition. Plant community dominants were ponderosa pine/pinegrass, ponderosa pine/bitterbrush/Idaho fescue savanna, low sagebrush/bluebunch wheatgrass, and rigid sagebrush scabland. None of the sites was grazed during the previous 30 years or during the 27-year study. Each location was permanently marked by fence posts, and a meter board was placed 10 m down an established transect line. Photographs (color slides) were taken down the transect, with closeups left and right of the meter board. Sampling was limited to August 1-4 each year, when canopy cover and herbage production were determined. Both total canopy cover and herbage production varied by about a 2.4-fold difference on each site over the 27 years. Apparently "good range condition" may be something of a "running target" that lacks a well-defined set of parameters, and canopy cover is a poor parameter for characterizing range condition. Three of the four plant communities were dominated by bunchgrasses. Abundance of seedheads is commonly used to indicate good range health, but on these sites seedheads were not produced about half the time. Because these sites were in "good range condition," lack of seedhead production may indicate maximum competition in the community; maximum competition and maximum vigor do not seem to be synonymous.
These bunchgrass communities varied in their greenness on the first of August each year, from cured brown to rather vibrant green, suggesting important annual differences in phenology. The pinegrass community, dominated by rhizomatous species, showed surprising variance in seedhead production. Pinegrass did not flower, but Wheeler's bluegrass, lupine, and Scouler's woolyweed were quite variable, producing inflorescences only 75 percent of the time.
Field Survey of Growth and Colonization of Nonnative Trees on Mainland Alaska By www.fs.fed.us Published On :: Mon, 17 Apr 2006 15:25:36 PST Six of nine nonnative boreal conifers in three genera (Abies, Larix, and Pinus) regenerated within 11 to 31 years after they were introduced to mainland Alaska. Lodgepole pine (Pinus contorta var. latifolia Engelm.) and the Siberian larches (Larix sibirica Ledeb. and L. sukaczewii N. Dyl.) were the most widely introduced species and will likely be the first nonnative conifers to naturalize. Siberian larch grew up to six times more stem volume than white spruce in the first 40 years on upland sites, but was susceptible to the larch sawfly and a blue stain pathogen carried by bark beetles. On productive sites, lodgepole pine appeared to grow more stem wood than white spruce for about 35 years after planting. Snowshoe hares and moose were the most serious pests of the nonnative conifers. Balsam fir (Abies balsamea (L.) Mill.) was the only species to regenerate in an established moss understory. Growth and age relationships were negative for all adequately sampled nonnative conifers and positive for native white spruce (Picea glauca (Moench) Voss). Data were insufficient to assess niche availability for commercial use of productive nonnative conifers in mixed stands in Alaska. Survey results indicate that introduction and naturalization of noninvasive tree species may improve the diversity, stability, and productivity of managed forest ecosystems.
How to see Starlink satellites over Northern Ireland tonight By www.belfastlive.co.uk Published On :: Fri, 8 May 2020 15:59:52 +0000 Weather permitting, the string of satellites will be quite the sight over Northern Ireland's night sky
Media Monitors Shows Advertising Priorities During The Pandemic By www.allaccess.com Published On :: Thu, 07 May 2020 01:20:01 -0700 New analysis from MEDIA MONITORS shows that the COVID-19 pandemic has significantly altered the nature of radio advertising over the past several weeks. MEDIA MONITORS Pres. PHILIPPE …
John Harrington - WHAT WE USE - CPUs and Monitors By www.assignmentconstruct.com Published On :: 2008-01-24T15:40:00+00:00 Here's a video segment on the CPU and monitor kit that we use. A transcription of the video is available after the jump.
Prince's Never-Before-Aired Demo Show For SiriusXM To Air Tonight By www.allaccess.com Published On :: Fri, 01 May 2020 05:55:08 -0700 The previously-announced debut of the pilot show PRINCE produced for SIRIUS SATELLITE RADIO in 2005 (NET NEWS 4/29) will air TONIGHT (5/1) on the limited-run PRINCE CHANNEL, premiering at 7p …
Smithsonian Institution's National Museum of African Art By wordpress.org Published On :: Fri, 12 Sep 2014 13:12:17 +0000 The Smithsonian Institution's National Museum of African Art aims to inspire conversations about the beauty, power, and diversity of African arts and cultures worldwide.
Australian study: Many home blood pressure monitors not validated for accuracy By newsroom.heart.org Published On :: Mon, 13 Apr 2020 09:00:00 GMT Research Highlights: Most home blood pressure monitoring devices sold in Australia by global e-commerce sites such as Amazon and eBay have not been validated (tested for accuracy). Using an unvalidated device could lead to incorrect at-home blood...
Telemonitoring plus phone counseling lowers blood pressure among black and Hispanic stroke survivors By newsroom.heart.org Published On :: Fri, 21 Feb 2020 19:15:00 GMT Research Highlights: Minority stroke survivors experience better blood pressure control when lifestyle counseling by phone from a nurse is added to home blood pressure telemonitoring. Improved blood pressure control could lower strokes and stroke ...
Antonio Petrucelli's Maps (Edes Harrison & Artzybasheff) By makingmaps.net Published On :: Mon, 03 Feb 2020 17:54:15 +0000 Antonio Petrucelli's name is probably unfamiliar to map people, although it shouldn't be. Petrucelli was one of the artists whose work was commonly found on the cover of (and inside) Fortune magazine from the 1930s through the 1950s. Petrucelli collaborated with Richard Edes Harrison and was a contemporary of Boris Artzybasheff. Like Edes Harrison and Artzybasheff, […]
How To Design an Iconic Logo? By feedproxy.google.com Published On :: Fri, 08 May 2020 11:32:25 +0000 Have you ever wondered why Adidas, Nike, Apple, Unilever, and many others have such remarkable logotypes? What lies behind their success? We have an answer: these companies followed certain rules to create meaningful and interesting logos. We analyzed their experience and chose the five best tips for an iconic logo design...
The chronic and evolving neurological consequences of traumatic brain injury By feedproxy.google.com Published On :: Wed, 07 Aug 2019 15:48:01 EDT Traumatic brain injury (TBI) can have lifelong and dynamic effects on health and wellbeing. Research on the long-term consequences emphasises that, for many patients, TBI should be conceptualised as a chronic health condition. Evidence suggests that functional outcomes after TBI can show improvement or deterioration up to two decades after injury, and rates of all-cause mortality remain elevated for many years. Furthermore, TBI represents a risk factor for a variety of neurological illnesses, including epilepsy, stroke, and neurodegenerative disease. With respect to neurodegeneration after TBI, post-mortem studies of long-term neuropathology after injury have identified complex persisting and evolving abnormalities best described as polypathology, which includes chronic traumatic encephalopathy. Despite growing awareness of the lifelong consequences of TBI, substantial gaps in research exist. Improvements are therefore needed in understanding chronic pathologies and their implications for survivors of TBI, which could inform long-term health management in this sizeable patient population.
Dispositional optimism and cognitive functioning following traumatic brain injury By feedproxy.google.com Published On :: Tue, 24 Sep 2019 13:07:37 EDT The association of dispositional optimism with health-related factors has been well established in several clinical populations, but little is known about the role of optimism in recovery after traumatic brain injury (TBI). Given the high prevalence of cognitive complaints after TBI, the present study examined the association between optimism and cognitive functioning after TBI.
Spread the Word: Beautiful Testimonial UI Examples By speckyboy.com Published On :: Mon, 27 Apr 2020 10:19:28 +0000 Everybody wants to receive positive feedback for a job well done. That's one reason why so many businesses include testimonials on their website. Beyond the warm and fuzzy feelings, testimonials...
Ironic Posters of Adventures at Home By www.fubiz.net Published On :: Wed, 06 May 2020 13:17:36 +0000 While we can no longer travel the world and are being asked to stay at home, the "Coronavirus Tourism Bureau" has taken the opportunity to unveil its latest campaign. This fictional, tongue-in-cheek tourism bureau was created by Jennifer Baer, a California graphic designer, to promote social distancing. […]
Wildlife in Patagonia Captured by Konsta Punkka By www.fubiz.net Published On :: Fri, 08 May 2020 13:23:53 +0000 In 2016, the path of Finnish photographer Konsta Punkka crossed that of two pumas. He was then in the heart of Patagonia, in Chile, in the vast Torres del Paine National Park. A specialist in adventure shots and images of animals in their natural habitat, the photographer spent some ten days following the cats to capture […]
Court approves pilot program to test electronic search warrants By feedproxy.google.com Published On :: Wed, 06 May 2020 20:37:15 PDT The Iowa Supreme Court approved a pilot program in the 4th Judicial District — Audubon, Cass, Fremont, Harrison, Mills, Montgomery, Pottawattamie, Page and Shelby counties — to develop procedures for the use of electronic search warrants.

Electronic search warrants will reduce the time required to obtain warrants, reduce travel time by law enforcement and make more effective use of judges' time, according to the order. Paper warrants require law enforcement to fill out application forms, leave the scene of the potential search and drive to find a judge, either at a courthouse during business hours or at the judge's home after hours. If the judge grants the warrant, the officer then has to drive back to the scene to execute it. Electronic warrants can be submitted to a judge from a squad car computer, which is more efficient for law enforcement and the judges. The pilot program will be evaluated by the court annually and will continue until further notice.

Fourth Judicial District Chief Judge Jeff Larson, who was on the advisory committee that developed recommendations for the new process, talked about the project, which will start in the next few weeks. Page County Chief Deputy Charles McCalla, 6th Judicial District Associate Judge Nicholas Scott, Linn County Sheriff Capt. Greg McGivern and Marion police Lt. Scott Elam also provided their thoughts about electronic search warrants.

Q: Iowa courts started going paperless in 2010, so why did it take so long to get a pilot program for electronic search warrants?

A: Larson: It had been discussed at various levels since (the electronic document management system) started. We should take advantage of the electronic process because it will save us money. Most law enforcement agencies are now used to filing electronic citations from their patrol cars and offices.
There may have been some pushback a few years ago because some counties or offices didn't have computer scanners and needed technology. Now, the rural offices have that technology.

Q: As a task force member working on this program, what were the hurdles?

A: Larson: It was just working through the procedural issues to make sure there would be safeguards throughout the process. When a search warrant is needed, law enforcement has to fill out the search warrant package, including the application with all the pertinent information, and submit it to a magistrate, associate or district judge in their judicial district. The officer or deputy can then call the judge to alert him or her to the warrant, and the judge can ask for any additional information needed. The judge then administers the oath over the phone and signs off on or denies the warrant. Law enforcement doesn't have to leave the scene and can print the warrant from their squad car computer. The process of moving to electronic warrants started in 2017, when lawmakers amended the law to allow warrants to be submitted electronically; in 2018, the state court administrator's office set up an advisory committee to develop recommendations.

Q: What has been the process to get a search warrant?

A: Larson: Law enforcement would have to leave the scene, fill out paperwork and then, many times, travel miles to the courthouse to have the judge sign it or, if it's after hours, go to a judge's home. The officer may not be in the same county as the courthouse where the judge works or where the judge lives. (It) can take a lot of time. The process is way overdue.

Q: Page County Sheriff's Chief Deputy Charles McCalla, what do you see as the biggest advantage of filing them electronically?

A: McCalla: The smaller counties have limited manpower, and some of the judges, like in Mills County, may be 60 to 70 miles away if a search warrant is needed after hours.
Just traveling across the county can take time, depending on where you are. At a minimum, we probably have to drive 30 minutes and up to an hour to get to a judge. This will save us time and travel money, and provide safety because we can stay at the scene to ensure the evidence hasn't been tampered with.

Q: Is there a recent incident where an electronic search warrant may have helped?

A: McCalla: A few weeks ago, there was a theft report for a stolen chain saw, and deputies went to the home and saw guns all over the house, and they knew the guy who lived there had been convicted. They didn't want to tip him off, so they just left the scene and went to get a search warrant. Luckily, the evidence was still there when they came back. They found about 90 guns.

Q: How do you feel about being the "guinea pigs" for the process?

A: McCalla: Happy to be. As law enforcement, we're natural fixers. We find solutions. And this is an ideal time to use the process during the COVID-19 pandemic to keep everyone safe. We won't have to have any face-to-face contact with the judges.

Q: Is Linn County excited about the program, once it's tested and used across the state?

A: Scott: I think many of us in the criminal justice system are eagerly awaiting the results of the pilot. It has the potential to make the system more efficient. It is in the interest of the police and the suspect, who is often detained pending a warrant, to get the search warrant application reviewed by a judge as soon as possible. A potential benefit is that officers could also use warrants more often, which protects citizens from unlawful searches and seizures if a judge first reviews the evidence.

A: McGivern: I believe the implementation will be a much faster and more efficient process for deputies. Like any new process, there may be some revisions that will have to be worked out, but I look forward to it.
A: Elam: We’ve done it this way for a long time, and it can be a bit of a haul for us, depending who’s on call (among the judges) after hours. It’s nice to see there’s a pilot. The concern would be if something goes wrong in the process, if the internet is down or something else. Now, we have to go from Marion to the Linn County Courthouse. Then we go to the county attorney’s office to get a prosecutor to review the warrant and then find a judge (in the courthouse during business hours). That takes some time. If you can type out the application from your car right at the scene, it would help with details on the warrant, describing the structure or property needing to be searched. I just hope they work out all the bugs first.

Comments: (319) 398-8318; trish.mehaffey@thegazette.com Full Article Public Safety
Auphonic Leveler 1.8 and Auphonic Multitrack 1.4 Updates By feedproxy.google.com Published On :: Tue, 06 Jun 2017 06:50:06 +0000

Today we released free updates for the Auphonic Leveler Batch Processor and the Auphonic Multitrack Processor, with many algorithm improvements and bug fixes for Mac and Windows.

Changelog

- Linear Filtering Algorithms to avoid Asymmetric Waveforms: New zero-phase adaptive filtering algorithms to avoid asymmetric waveforms. In asymmetric waveforms, the positive and negative amplitude values are disproportionate - please see Asymmetric Waveforms: Should You Be Concerned?. Asymmetric waveforms are quite natural and not necessarily a problem. They are particularly common in recordings of speech and vocals, and can be caused by low-end filtering. However, they limit the amount of gain that can be safely applied without introducing distortion or clipping due to aggressive limiting.
- Noise Reduction Improvements: New and improved noise profile estimation algorithms and bug fixes for parallel noise reduction algorithms.
- Processing Finished Notification on Mac: A system notification (including a short glass sound) is now displayed on Mac OS when the Auphonic Leveler or Auphonic Multitrack has finished processing - thanks to Timo Hetzel.
- Improved Dithering: Improved dithering algorithms - using SoX - if a bit-depth reduction is necessary during file export.
- Auphonic Multitrack Fixes: Fixes for ducking and background tracks and for very short music tracks.
- New Desktop Apps Documentation: The documentation of our desktop apps is now integrated in our new help system: see Auphonic Leveler Batch Processor and Auphonic Multitrack Processor.
- Bug Fixes and Audio Algorithm Improvements: This release also includes many small bug fixes, and all audio algorithms come with improvements and updated classifiers using the data from our Web Service.

About the Auphonic Desktop Apps

We offer two desktop programs which include our audio algorithms only.
The algorithms are computed offline on your device and are exactly the same as those implemented in our Web Service. The Auphonic Leveler Batch Processor is a batch audio file processor and includes all our (Singletrack) Audio Post Production Algorithms. It can process multiple productions at once. Auphonic Multitrack includes our Multitrack Post Production Algorithms and requires multiple parallel input audio tracks, which will be analyzed and processed individually as well as combined to create one final mixdown.

Upgrade Now

Everyone is encouraged to download the latest binaries: Auphonic Leveler Download, Auphonic Multitrack Download. Please let us know if you have any questions or feedback! Full Article Development News
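The changelog above mentions that dithering (via SoX) is applied when a bit-depth reduction is necessary during export. As an illustration of the underlying idea only - not Auphonic's or SoX's actual implementation - here is a minimal TPDF dither sketch in Python:

```python
import random

def tpdf_dither_to_16bit(samples, seed=0):
    """Quantize floating-point samples in [-1.0, 1.0] to 16-bit integers.

    Triangular (TPDF) dither noise of roughly +/- 1 LSB is added before
    rounding, which decorrelates the quantization error from the signal
    instead of producing audible harmonic distortion.
    """
    rng = random.Random(seed)
    out = []
    for x in samples:
        # TPDF noise: sum of two independent uniform values in [-0.5, 0.5)
        noise = (rng.random() - 0.5) + (rng.random() - 0.5)
        q = round(x * 32767 + noise)
        out.append(max(-32768, min(32767, q)))  # clamp to the int16 range
    return out

dithered = tpdf_dither_to_16bit([0.0, 0.25, -1.0])
print(dithered)
```

In a real export, a tool like SoX performs this step (with more sophisticated noise shaping); the sketch only shows why noise is added before rounding.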
Facebook Live Streaming and Audio/Video Hosting connected to Auphonic By feedproxy.google.com Published On :: Thu, 22 Jun 2017 05:55:57 +0000

Facebook is not only a social media giant; the company also provides valuable tools for broadcasting. Today we release a connection to Facebook, which allows you to use the Facebook tools for video/audio production and publishing within Auphonic and our connected services. The following workflows are possible with Facebook and Auphonic:

- Use Facebook for live streaming, then import, process and distribute the audio/video with Auphonic.
- Post your Auphonic audio or video productions directly to the news feed of your Facebook Page or User.
- Use Facebook as a general media hosting service and share the link or embed the audio/video on any webpage (also visible to non-Facebook users).

Connect to Facebook

First you have to connect to a Facebook account at our External Services Page: click on the "Facebook" button and select whether you want to connect to your personal Facebook User or to a Facebook Page. It is always possible to remove or edit the connection in your Facebook Settings (tab Business Integrations).

Import (Live) Videos from Facebook to Auphonic

Facebook Live is an easy (and free) way to stream live videos. We implemented an interface to use Facebook as an Incoming External Service: please select a (live or non-live) video from your Facebook Page/User as the source of a production and then process it with Auphonic. This workflow allows you to use Facebook for live streaming, import and process the audio/video with Auphonic, and then publish a podcast and video version of your live video to any of our connected services.

Export from Auphonic to Facebook

Similar to YouTube, it is possible to use Facebook for media file hosting.
Please add your Facebook Page/User as an External Service in your Productions or Presets to upload the Auphonic results directly to Facebook. Options for the Facebook export:

Distribution Settings

- Post to News Feed: The exported video is posted directly to your news feed / timeline.
- Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see for example Auphonic's video tab), but it is not posted to your news feed (you can do that later if you want).
- Secret: Only you can see the exported video; it is not shown in the Facebook video tab and it is not posted to your news feed (you can do that later if you want).

Embeddable

Choose whether the exported video should be embeddable in third-party websites.

It is always possible to change the distribution/privacy and embeddable options later directly on Facebook. For example, you can export a video to Facebook as Secret and publish it to your news feed whenever you want. If your production is audio-only, we automatically generate a video track from the Cover Image and (possibly) Chapter Images. Alternatively, you can select an Audiogram Output File if you want to add an audiogram (audio waveform visualization) to your Facebook video - for details please see Auphonic Audiogram Generator. Auphonic Title and Description metadata fields are exported to Facebook as well. If you add Speech Recognition to your production, we create an SRT file with the speech recognition results and add it to your Facebook video as captions. See the example below.

Facebook Video Hosting Example with Audiogram and Automatic Captions

Facebook can be used as a general video hosting service: even if you export videos as Secret, you will get a direct link to the video which can be shared or embedded in any third-party website. Users without a Facebook account are also able to view these videos.
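The caption workflow described above exports speech recognition results as an SRT file. As a rough illustration of that format (the segment input here is an assumed shape for the sketch, not Auphonic's internal representation), SRT blocks can be generated like this:

```python
def to_srt(segments):
    """Render (start_sec, end_sec, text) tuples into SRT caption format:
    a numbered block with an HH:MM:SS,mmm --> HH:MM:SS,mmm timing line."""
    def ts(t):
        ms = int(round(t * 1000))
        h, rem = divmod(ms, 3600000)
        m, rem = divmod(rem, 60000)
        s, ms = divmod(rem, 1000)
        return "%02d:%02d:%02d,%03d" % (h, m, s, ms)

    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append("%d\n%s --> %s\n%s" % (i, ts(start), ts(end), text))
    return "\n\n".join(blocks) + "\n"

print(to_srt([(0.0, 2.5, "Welcome to the show.")]))
```

Note the comma as the millisecond separator, which distinguishes SRT timing lines from WebVTT's dot notation.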
In the example below, we automatically generate an audiogram video for an audio-only production, use our integrated Speech Recognition system to create captions and export the video as Secret to Facebook. Afterwards it can be embedded directly into this blog post (enable captions if they don't show up by default) - for details please see How to embed a video. It is also possible to just use the generated result URL from Auphonic to share the link to your video (also visible to non-Facebook users): https://www.facebook.com/auphonic/videos/1687244844638091/

Important Note: Facebook needs some time to process an exported video (up to a few minutes), and the direct video link won't work before the processing is finished - please try again a bit later! On Facebook Pages, you can see the processing progress in your Video Library.

Conclusion

Facebook has many broadcasting tools to offer and is a perfect addition to Auphonic. Both systems and our other external services can be used to create automated processing and publishing workflows. Furthermore, export and import to/from Facebook are also fully supported in the Auphonic API. Please contact us if you have any questions or further ideas! Full Article Audio News
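As noted above, the Facebook export is also available through the Auphonic API. The sketch below shows roughly how a production payload with an outgoing service could be built; the exact field names ("outgoing_services", "privacy") and the service UUID are assumptions to verify against the current API documentation before use:

```python
import json

AUPHONIC_API = "https://auphonic.com/api"  # base URL of the Auphonic REST API

def build_production_payload(title, service_uuid, privacy="secret"):
    """Build a JSON body for creating a production whose result is exported
    to a connected Facebook Page/User.

    `service_uuid` would be the UUID of a registered external service;
    the "privacy" option name is an assumption for this sketch - check
    the API docs for the settings your account exposes.
    """
    return {
        "metadata": {"title": title},
        "outgoing_services": [
            {"uuid": service_uuid, "privacy": privacy},
        ],
    }

payload = build_production_payload("My Live Stream", "fb-1234")
print(json.dumps(payload))
```

A client would then POST this payload (with HTTP basic auth) to the productions endpoint, upload the audio, and start the production; those request steps are omitted here.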
Auphonic Audio Inspector Release By feedproxy.google.com Published On :: Mon, 20 Nov 2017 09:01:02 +0000

At the Subscribe 9 Conference, we presented the first version of our new Audio Inspector. The Auphonic Audio Inspector is shown on the status page of a finished production and displays details about what our algorithms are changing in audio files. (Screenshot: the Auphonic Audio Inspector on the status page of a finished Multitrack Production; please click on the screenshot to see it in full resolution.) It is possible to zoom and scroll within audio waveforms, and the Audio Inspector can be used to manually check production results and input files. In this blog post, we will discuss the usage and all current visualizations of the Inspector. If you just want to try the Auphonic Audio Inspector yourself, take a look at this Multitrack Audio Inspector Example.

Inspector Usage

The control bar of the Audio Inspector contains a scrollbar, play button, current playback position and length, a button to show input audio file(s), zoom in/out buttons, a legend toggle and a button to switch to fullscreen mode.

- Seek in Audio Files: Click or tap inside the waveform to seek in files. The red playhead will show the current audio position.
- Zoom In/Out: Use the zoom buttons ([+] and [-]), the mouse wheel or zoom gestures on touch devices to zoom in/out the audio waveform.
- Scroll Waveforms: If zoomed in, use the scrollbar or drag the audio waveform directly (with your mouse or on touch devices).
- Show Legend: Click the [?] button to show or hide the legend, which describes details about the visualizations of the audio waveform.
- Show Stats: Use the Show Stats link to display Audio Processing Statistics of a production.
- Show Input Track(s): Click Show Input to show or hide input track(s) of a production: now you can see and listen to input and output files for a detailed comparison.
Please click directly on the waveform to switch/unmute a track - muted tracks are grayed out slightly. (Screenshot: four input tracks and the Auphonic output of a multitrack production.) Please click on the fullscreen button (bottom right) to switch to fullscreen mode. Now the audio tracks use all available screen space to show all waveform details. (Screenshot: a multitrack production with output and all input tracks in fullscreen mode; please click on the screenshot to see it in full resolution.) In fullscreen mode, it’s also possible to control playback and zooming with keyboard shortcuts: press [Space] to start/pause playback, use [+] to zoom in and [-] to zoom out.

Singletrack Algorithms Inspector

First, we discuss the analysis data of our Singletrack Post Production Algorithms. The audio levels of output and input files, measured according to the ITU-R BS.1770 specification, are displayed directly as the audio waveform. Click on Show Input to see the input and output file. Only one file is played at a time; click directly on the Input or Output track to unmute a file for playback. (See the first Leveler Audio Example to try the audio inspector yourself.)

Waveform Segments: Music and Speech (gold, blue)

Music/Speech segments are displayed directly in the audio waveform: music segments are plotted in gold/yellow, speech segments in blue (or light/dark blue).

Waveform Segments: Leveler High/No Amplification (dark, light blue)

Speech segments can be displayed in normal, dark or light blue. Dark blue means that the input signal was very quiet and contains speech, therefore the Adaptive Leveler has to use a high amplification value in this segment. In light blue regions, the input signal was very quiet as well, but our classifiers decided that the signal should not be amplified (breathing, noise, background sounds, etc.). Yellow/orange background segments display leveler fades.
Background Segments: Leveler Fade Up/Down (yellow, orange)

If the volume of an input file changes quickly, the Adaptive Leveler volume curve will increase/decrease very fast as well (= fade), and fades should be placed in speech pauses. Otherwise, if fades are too slow or occur during active speech, one will hear pumping speech artifacts. Exact fade regions are plotted as yellow (fade up, volume increase) and orange (fade down, volume decrease) background segments in the audio inspector.

Horizontal Lines: Noise and Hum Reduction Profiles (red)

Our Noise and Hiss Reduction and Hum Reduction algorithms segment the audio file into regions with different background noise characteristics, which are displayed as red horizontal lines in the audio inspector (top lines for noise reduction, bottom lines for hum reduction). Then a noise print is extracted in each region and a classifier decides if and how much noise reduction is necessary - this is plotted as a value in dB below the top red line. The hum base frequency (50Hz or 60Hz) and the strength of all its partials are also classified in each region; the value in Hz above the bottom red line indicates the base frequency and whether hum reduction is necessary or not (no red line). You can try the singletrack audio inspector yourself with our Leveler, Noise Reduction and Hum Reduction audio examples.

Multitrack Algorithms Inspector

If our Multitrack Post Production Algorithms are used, additional analysis data is shown in the audio inspector. The audio levels of the output and all input tracks are measured according to the ITU-R BS.1770 specification and are displayed directly as the audio waveform. Click on Show Input to see all the input files with track labels and the output file.
Only one file is played at a time; click directly into the track to unmute a file for playback.

Input Tracks: Waveform Segments, Background Segments and Horizontal Lines

Input tracks are displayed below the output file, including their track names. The same data as in our Singletrack Algorithms Inspector is calculated and plotted separately in each input track:

- Waveform Segments: Music and Speech (gold, blue), Leveler High/No Amplification (dark blue, light blue)
- Background Segments: Leveler Fade Up/Down (yellow, orange)
- Horizontal Lines: Noise and Hum Reduction Profiles (red)

Output Waveform Segments: Multiple Speakers and Music

Each speaker is plotted in a separate, blue-like color - in the example above we have 3 speakers (normal, light and dark blue) and you can see directly in the waveform when and which speaker is active. Audio from music input tracks is always plotted in gold/yellow in the output waveform; please try not to mix music and speech parts in music tracks (see also Multitrack Best Practice)! You can try the multitrack audio inspector yourself with our Multitrack Audio Inspector Example or our general Multitrack Audio Examples.

Ducking, Background and Foreground Segments

Music tracks can be set to Ducking, Foreground, Background or Auto - for more details please see Automatic Ducking, Foreground and Background Tracks.

Ducking Segments (light, dark orange)

In Ducking, the level of a music track is reduced if one of the speakers is active, which is plotted as a dark orange background segment in the output track. Foreground music parts, where no speaker is active and the music track volume is not reduced, are displayed as light orange background segments in the output track.

Background Music Segments (dark orange background)

Here the whole music track is set to Background and won’t be amplified when speakers are inactive.
Background music parts are plotted as dark orange background segments in the output track.

Foreground Music Segments (light orange background)

Here the whole music track is set to Foreground and its level won’t be reduced when speakers are active. Foreground music parts are plotted as light orange background segments in the output track. You can try the ducking/background/foreground audio inspector yourself: Fore/Background/Ducking Audio Examples.

Audio Search, Chapter Marks and Video

Audio Search and Transcriptions: If our Automatic Speech Recognition Integration is used, a time-aligned transcription text will be shown above the waveform. You can use the search field to search and seek directly in the audio file. See our Speech Recognition Audio Examples to try it yourself.

Chapter Marks: Chapter mark start times are displayed in the audio waveform as black vertical lines. The current chapter title is written above the waveform - see “This is Chapter 2” in the screenshot above. (Screenshot: a video production with output waveform, input waveform and transcriptions in fullscreen mode; please click on the screenshot to see it in full resolution.)

Video Display: If you add a Video Format or Audiogram Output File to your production, the audio inspector will also show a separate video track in addition to the audio output and input tracks. The video playback will be synced to the audio of output and input tracks.

Supported Audio Formats

We use the native HTML5 audio element for playback and the aurora.js JavaScript audio decoders to support all common audio formats. WAV, MP3, AAC/M4A and Opus are supported in all major browsers: Firefox, Chrome, Safari, Edge, iOS Safari and Chrome for Android. FLAC is supported in Firefox, Chrome, Edge and Chrome for Android - see FLAC audio format. In Safari and iOS Safari, we use aurora.js to directly decode FLAC files in JavaScript, which works but uses much more CPU compared to native decoding!
ALAC is not supported by any browser so far, therefore we use aurora.js to directly decode ALAC files in JavaScript. This works but uses much more CPU compared to native decoding! Ogg Vorbis is only supported by Firefox, Chrome and Chrome for Android - for details please see Ogg Vorbis audio format.

We suggest using a recent Firefox or Chrome browser for best performance. Decoding FLAC and ALAC files also works in Safari and iOS with the help of aurora.js, but JavaScript decoders need a lot of CPU and sometimes have problems with exact scrolling and seeking. Please see our blog post Audio File Formats and Bitrates for Podcasts for more details about audio formats.

Mobile Audio Inspector

Multiple responsive layouts were created to optimize the screen space usage on Android and iOS devices, so that the audio inspector is fully usable on mobile devices as well: tap into the waveform to set the playhead location, scroll horizontally to scroll waveforms, scroll vertically to scroll between tracks, use zoom gestures to zoom in/out, etc. Unfortunately the fullscreen mode is not available on iOS devices (thanks to Apple), but it works on Android and is a really great way to inspect everything using all the available screen space. (Screenshot: audio inspector in horizontal fullscreen mode on Android.)

Conclusion

Try the Auphonic Audio Inspector yourself: take a look at our Audio Example Page or play with the Multitrack Audio Inspector Example. The Audio Inspector is shown in all productions which are created in our Web Service. It can be used to manually check production result/input files and to send us detailed feedback about audio processing results. Please let us know if you have feedback or questions - more visualizations will be added in the future! Full Article Audio News
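The native-vs-aurora.js fallback logic described above can be summarized in a small sketch. The support table is distilled from this post; the function itself is illustrative only (the inspector's real logic runs in JavaScript in the browser):

```python
# Which browsers can decode each format natively, per the support table above.
# Everything else either falls back to the aurora.js JavaScript decoders
# (FLAC/ALAC, at a higher CPU cost) or is unplayable in that browser.
NATIVE_SUPPORT = {
    "wav":  {"firefox", "chrome", "safari", "edge", "ios-safari", "chrome-android"},
    "mp3":  {"firefox", "chrome", "safari", "edge", "ios-safari", "chrome-android"},
    "aac":  {"firefox", "chrome", "safari", "edge", "ios-safari", "chrome-android"},
    "opus": {"firefox", "chrome", "safari", "edge", "ios-safari", "chrome-android"},
    "flac": {"firefox", "chrome", "edge", "chrome-android"},
    "alac": set(),  # no native browser support at the time of the post
    "ogg-vorbis": {"firefox", "chrome", "chrome-android"},
}

def pick_decoder(fmt, browser):
    """Return 'native' if the browser decodes the format itself,
    'aurora.js' for the JavaScript fallback, or 'unsupported'."""
    if browser in NATIVE_SUPPORT.get(fmt, set()):
        return "native"
    if fmt in ("flac", "alac"):
        return "aurora.js"
    return "unsupported"

print(pick_decoder("flac", "safari"))
```

For example, FLAC in Safari resolves to the aurora.js fallback, matching the behavior described above.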
Auphonic Add-ons for Adobe Audition and Adobe Premiere By feedproxy.google.com Published On :: Wed, 14 Feb 2018 12:14:22 +0000

The new Auphonic Audio Post Production Add-ons for Adobe allow you to use the Auphonic Web Service directly within Adobe Audition and Adobe Premiere (Mac and Windows). (Screenshot: Audition Multitrack Editor with the Auphonic Audio Post Production Add-on.) The Auphonic Add-on can be embedded directly inside the Adobe user interface. It is possible to export tracks/projects from Audition/Premiere and process them with the Auphonic audio post production algorithms (loudness, leveling, noise reduction - see Audio Examples), use our Encoding/Tagging, Chapter Marks and Speech Recognition, and trigger Publishing with one click. Furthermore, you can import the result file of an Auphonic Production into Audition/Premiere.

Download the Auphonic Audio Post Production Add-ons for Adobe:
Audition: https://exchange.adobe.com/addons/products/20433
Premiere: https://exchange.adobe.com/addons/products/20429

Auphonic Add-on for Adobe Audition

(Screenshot: Audition Waveform Editor with the Auphonic Audio Post Production Add-on. Metadata, marker times and titles will be exported to Auphonic as well.)

Export from Audition to Auphonic

You can upload the audio of your current active document (a Multitrack Session or a single audio file) to our Web Service. In the case of a Multitrack Session, a mixdown will be computed automatically to create a Singletrack Production in our Web Service. Unfortunately, it is not possible to export the individual tracks in Audition, which could be used to create Multitrack Productions.

Metadata and Markers

All metadata (see tab Metadata in Audition) and markers (see tab Marker in Audition and the Waveform Editor screenshot) will be exported to Auphonic as well. Marker times and titles are used to create Chapter Marks (Enhanced Podcasts) in your Auphonic output files.
Auphonic Presets

You can optionally choose an Auphonic Preset to use previously stored settings for your production.

Start Production and Upload & Edit Buttons

Click Upload & Edit to upload your audio and create a new Production for further editing. After the upload, a web browser will be started to edit/adjust the production and start it manually. Click Start Production to upload your audio, create a new Production and start it directly without further editing. A web browser will be started to see the results of your production.

Audio Compression

Uncompressed Multitrack Sessions or audio files in Audition (WAV, AIFF, RAW, etc.) will be compressed automatically with lossless codecs to speed up the upload without a loss in audio quality. FLAC is used as the lossless codec on Windows and Mac OS (>= 10.13); older Mac OS systems (< 10.13) do not support FLAC and use ALAC instead.

Import Auphonic Productions in Audition

To import the result of an Auphonic Production into Audition, choose the corresponding production and click Import. The result file will be downloaded from the Auphonic servers and can be used within Audition. If the production contains multiple Output File Formats, the output file with the highest bitrate (or uncompressed/lossless if available) will be chosen.

Auphonic Add-on for Adobe Premiere

(Screenshot: Premiere Video Editor with the Auphonic Audio Post Production Add-on. The Auphonic Add-on can be embedded directly inside the Adobe Premiere user interface.)

Export from Premiere to Auphonic

You can upload the audio of your current Active Sequence in Premiere to our Web Service. We will automatically create an audio-only mixdown of all enabled audio tracks in your current Active Sequence. Video/image tracks are ignored: no video will be rendered or uploaded to Auphonic! If you want to export a specific audio track, please just mute the other tracks.
Start Production and Upload & Edit Buttons

Click Upload & Edit to upload your audio and create a new Production for further editing. After the upload, a web browser will be started to edit/adjust the production and start it manually. Click Start Production to upload your audio, create a new Production and start it directly without further editing. A web browser will be started to see the results of your production.

Auphonic Presets

You can optionally choose an Auphonic Preset to use previously stored settings for your production.

Chapter Markers

Chapter Markers in Premiere (not all the other marker types!) will be exported to Auphonic as well and are used to create Chapter Marks (Enhanced Podcasts) in your Auphonic output files.

Audio Compression

The mixdown of your Active Sequence in Premiere will be compressed automatically with lossless codecs to speed up the upload without a loss in audio quality. FLAC is used as the lossless codec on Windows and Mac OS (>= 10.13); older Mac OS systems (< 10.13) do not support FLAC and use ALAC instead.

Import Auphonic Productions in Premiere

To import the result of an Auphonic Production into Premiere, choose the corresponding production and click Import. The result file will be downloaded from the Auphonic servers and can be used within Premiere. If the production contains multiple Output File Formats, the output file with the highest bitrate (or uncompressed/lossless if available) will be chosen.

Installation

Install our add-ons for Audition and Premiere directly on the Adobe Add-ons website:
Auphonic Audio Post Production for Adobe Audition: https://exchange.adobe.com/addons/products/20433
Auphonic Audio Post Production for Adobe Premiere: https://exchange.adobe.com/addons/products/20429
The installation requires the Adobe Creative Cloud desktop application and might take a few minutes.
Please also try restarting Audition/Premiere if the installation does not work (on Windows it was once even necessary to restart the computer to trigger the installation). After the installation, you can start our add-ons directly in Audition/Premiere: navigate to Window -> Extensions and click Auphonic Post Production.

Enjoy!

Thanks a lot to Durin Gleaves and Charles Van Winkle from Adobe for their great support! Please let us know if you have any questions or feedback! Full Article Audio News
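The FLAC/ALAC rule described in the Audio Compression sections above (FLAC on Windows and Mac OS >= 10.13, ALAC on older Mac OS) can be sketched as a small decision function. This is illustrative only; the add-ons implement the choice internally:

```python
def lossless_upload_codec(platform, macos_version=None):
    """Choose the lossless codec used to compress audio before upload.

    Follows the rule from the post: FLAC on Windows and on Mac OS >= 10.13;
    ALAC on older Mac OS versions, which lack FLAC support.
    `macos_version` is a (major, minor) tuple such as (10, 13).
    """
    if platform == "windows":
        return "flac"
    if platform == "mac":
        if macos_version is not None and macos_version >= (10, 13):
            return "flac"
        return "alac"  # older (or unknown) Mac OS: fall back to ALAC
    raise ValueError("unsupported platform: %r" % platform)

print(lossless_upload_codec("mac", (10, 12)))
```

Either way the upload stays lossless, so the codec choice affects only upload size and compatibility, never audio quality.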
New Auphonic Privacy Policy and GDPR Compliance By feedproxy.google.com Published On :: Thu, 24 May 2018 09:26:55 +0000

The new General Data Protection Regulation (GDPR) of the European Union comes into force on May 25th, 2018. We used this opportunity to rework many of our internal data processing structures, remove unnecessary trackers and apply this strict and transparent regulation to all our customers worldwide. (Image from pixabay.com.) At Auphonic we store as little personal information as possible about your usage and production data. Here are a few human-readable excerpts from our privacy policy about which information we collect, how we process it, and how long and where we store it - for more details please see our full Privacy Policy.

Information that we collect

- Your email address when you create an account.
- Your files, content, configuration parameters and other information, including your photos, audio or video files, production settings, metadata and emails.
- Your tokens or authentication information if you choose to connect to any external services.
- Your subscription plan, credits purchases and production billing history associated with your account, where applicable.
- Your interactions with us, whether by email, on our blog or on our social media platforms.

We do not process any special categories of data (also commonly referred to as “sensitive personal data”).

How we use and process your Data

- To authenticate you when you log on to your account.
- To run your Productions, such that Auphonic can create new media files from your Content according to your instructions.
- To improve our audio processing algorithms. For this purpose, you agree that your Content may be viewed and/or listened to by an Auphonic employee or any person contracted by Auphonic to work on our audio processing algorithms.
- To connect your Auphonic account to an external service according to your instructions.
- To develop, improve and optimize the contents, screen layouts and features of our Services.
- To follow up on any question and request for assistance or information.

When using our Service, you fully retain any rights that you have with regard to your Content, including copyright.

How long we store your Information

Your Productions and any associated audio or video files will be permanently deleted from our servers, including all metadata and possible data from external services, after 21 days (7 days for video productions). We will, however, keep billing metadata associated with your Productions in an internal database (how many hours of audio you processed). Also, we might store selected audio and/or video files (or excerpts thereof) from your Content in an internal storage space for the purpose of improving our audio processing algorithms. Other information like Presets, connected external services, account settings etc. will be stored until you delete it or your account is deleted.

Where we store your Data

All data that we collect from you is stored on secure servers in the European Economic Area (in Germany).

More Information and Contact

For more information please read our full Privacy Policy. Please do not hesitate to contact us regarding any matter relating to our privacy policy and GDPR compliance! Full Article News
New Auphonic Transcript Editor and Improved Speech Recognition Services By feedproxy.google.com Published On :: Tue, 03 Jul 2018 05:35:25 +0000

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings and, more usefully, means podcasts become searchable. Now we have integrated two more speech recognition engines, Amazon Transcribe and Speechmatics, and while integrating these services, we also took the opportunity to develop a completely new Transcript Editor. (Screenshot: our Transcript Editor with word confidence highlighting and the edit bar.) Try out the Transcript Editor Examples yourself!

The new Auphonic Transcript Editor is included directly in our HTML transcript output file, displays word confidence values so you can instantly see which sections should be checked manually, supports direct audio playback and HTML/PDF/WebVTT export, and allows you to share the editor with someone else for further editing. The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services. They also return word confidence values, timestamps and some punctuation, which is exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcript editing. Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:

- Edit the transcription directly in the HTML document.
- Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as the speech recognition engine).
- Listen to audio playback of specific words directly in the HTML editor.
- Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can just send the HTML file to someone else to manually check the automatically generated transcription.
- Export the edited transcript to HTML, PDF or WebVTT.
- Completely usable on all mobile devices and desktop browsers.

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example: a singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as the speech recognition engine, without any keywords or further manual editing.

2. Multitrack Transcript Editor Example: a multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as the speech recognition engine, without any further manual editing. As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

By clicking the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below). If you make a mistake whilst editing, you can simply use the undo/redo function of the browser. When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one speaker: simply click on an instance of a speaker title and change it to the appropriate name, and this name will then appear throughout the whole transcript.
Word Confidence Highlighting Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red is dependent on the actual word confidence value: The darker the red, the lower the confidence value. This means you can instantly see which sections you should check/re-work manually to increase the accuracy. Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing. Use the button Add/Remove Highlighting to disable/enable word confidence highlighting. NOTE: Word confidence values are only available in Amazon Transcribe or Speechmatics, not if you use our other integrated speech recognition services! Audio Playback The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor). This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file. If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor. Otherwise we will use the output file generated by Auphonic. Please note that this file is password protected for the current Auphonic user and will be deleted in 21 days. If no audio file is available in the transcript editor, or cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback. Export Formats, Save/Share Transcript Editor Click on the button Export... to see all export and saving/sharing options: Save/Share Editor The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. 
Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies). Export HTML / Export PDF / Export WebVTT Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported in web audio players of the Podlove Publisher or Podigee). Every export format is rendered directly in the browser, no server needed. Amazon Transcribe The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low cost, including keywords, word confidence, timestamps, and punctuation. UPDATE 2019: Amazon Transcribe offers more languages now - please see Amazon Transcribe Features! Pricing The free tier offers 60 minutes of free usage a month for 12 months. After that, it is billed monthly at a rate of $0.0004 per second ($1.44/h). More information is available at Amazon Transcribe Pricing. Custom Vocabulary (Keywords) Support Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary, specific to your use case (e.g. product names, domain-specific terminology, or names of individuals). The same feature is also available in the Google Cloud Speech API. Timestamps, Word Confidence, and Punctuation Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text. It also adds some punctuation, which is combined with our own punctuation and formatting automatically. The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages. 
However, the processing time of Amazon Transcribe is much slower compared to all our other integrated services! Try it yourself: Connect your Auphonic account with Amazon Transcribe at our External Services Page. Speechmatics Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation. Many Languages Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asiatic languages). It also has a Global English feature, which supports different English accents during transcription. Timestamps, Word Confidence, and Punctuation Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation. Pricing Speechmatics is the most expensive speech recognition service at Auphonic. Pricing starts at £0.06 per minute of audio and can be purchased in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute ($3.98/h) is available when purchasing £1,000 blocks. They offer significant discounts for users requiring higher volumes. At this further reduced price point it is a similar cost to the Google Speech API (or lower). If you process a lot of content, you should contact them directly at sales@speechmatics.com and say that you wish to use it with Auphonic. More information is available at Speechmatics Pricing. Speechmatics offers high-quality transcripts in many languages. But these features do come at a price: it is the most expensive speech recognition service at Auphonic. Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet. Try it yourself: Connect your Auphonic account with Speechmatics at our External Services Page. What do you think? Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated. 
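As a sanity check on the per-hour rates quoted for both services: the figures follow directly from the per-unit prices. The GBP-to-USD factor below is the exchange rate implied by the post's own numbers, not an official rate:

```python
GBP_TO_USD = 1.327  # exchange rate implied by the post's figures (assumption)

def per_hour(rate_per_unit, unit_seconds):
    """Convert a price per unit of audio into a price per hour of audio."""
    return rate_per_unit * (3600.0 / unit_seconds)

amazon_usd_h = per_hour(0.0004, 1)                     # $0.0004/s -> $1.44/h
speechmatics_usd_h = per_hour(0.06, 60) * GBP_TO_USD   # £0.06/min -> about $4.78/h
reduced_usd_h = per_hour(0.05, 60) * GBP_TO_USD        # £0.05/min -> about $3.98/h
```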
We would also like to hear any comments you have on the transcript editor particularly - is there anything missing, or anything that could be implemented better? Please let us know! Full Article Audio News
oni Audio Manipulations and Dynamic Ad Insertion with the Auphonic API By feedproxy.google.com Published On :: Sun, 22 Jul 2018 15:43:48 +0000 We are pleased to announce a new Audio Inserts feature in the Auphonic API: audio inserts are separate audio files (like intros/outros), which will be inserted into your production at a defined offset. This blog post shows how one can use this feature for Dynamic Ad Insertion and discusses other Audio Manipulation Methods of the Auphonic API. API-only Feature For the general podcasting hobbyist, or even for someone producing a regular podcast, the features that are accessible via our web interface are more than sufficient. However, some of our users, like podcasting companies who integrate our services as part of their products, asked us for dynamic ad insertions. We teamed up with them to develop a way of making this work within the Auphonic API. We are pleased therefore to announce audio inserts - a new feature that has been made part of our API. This feature is not available through the web interface though, it requires the use of our API. Before we talk about audio inserts, let's talk about what you need to know about dynamic ad insertion! Dynamic Ad Insertion There are two ways of dealing with adverts within podcasts. In the first, adverts are recorded or edited into the podcast and are fixed, or baked in. The second method is to use dynamic insertion, whereby the adverts are not part of the podcast recording/file but can be inserted into the podcast afterwards, at any time. This second approach would allow you to run new ad campaigns across your entire catalog of shows. As a podcaster this allows you to potentially generate new revenue from your old content. As a hosting company, dynamic ad insertion allows you to choose up to date and relevant adverts across all the podcasts you host. You can make these adverts relevant by subject or location, for instance. 
Your users can define the times for the ads in their podcast episodes; you are then in control of the adverts you insert. Audio Inserts in Auphonic Whichever approach to adverts you are taking, using audio inserts can help you. Audio inserts are separate audio files which will be inserted into your main single or multitrack production at your defined offset (in seconds). When a separate audio file is inserted as part of your production, it creates a gap in the podcast audio file, shifting the audio back by the length of the insert. Helpfully, chapters and other time-based information like transcriptions are also shifted back when an insert is used. The biggest advantage of this is that Auphonic will apply loudness normalization to the audio insert so, from an audio point of view, it matches the rest of the podcast. Although created with dynamic ad insertion in mind, this feature can be used for any type of audio insert: adverts, songs, individual parts of a recording, etc. In the case of baked-in adverts, you could upload your already processed advert audio as an insert, without having to edit it into your podcast recording using a separate audio editing application. Please note that audio inserts should already be edited and processed before using them in a production. (This is usually the case with pre-recorded adverts anyway.) The only algorithm that Auphonic applies to an audio insert is loudness normalization in order to match the loudness of the entire production. Auphonic does not add any other processing (i.e. no leveling, noise reduction, etc.). Audio Inserts Coding Example Here is a brief overview of how to use our API for audio inserts. Be warned, this section is coding heavy, so if this isn't your thing, feel free to move along to the next section! You can add audio insert files with a call to https://auphonic.com/api/production/{uuid}/multi_input_files.json, where uuid is the UUID of your production. 
Here is an example with two audio inserts from an https URL. The offset/position in the main audio file must be given in seconds: curl -X POST -H "Content-Type: application/json" https://auphonic.com/api/production/{uuid}/multi_input_files.json -u username:password -d '[ { "input_file": "https://mydomain.com/my_audio_insert_1.wav", "type": "insert", "offset": 20.5 }, { "input_file": "https://mydomain.com/my_audio_insert_2.wav", "type": "insert", "offset": 120.3 } ]' More details showing how to use audio inserts in our API can be seen here. Additional API Audio Manipulations In addition to audio inserts, using the Auphonic API offers a number of other audio manipulation options, which are not available via the web interface: Cut start/end of audio files: See Docs In Single-track productions, this feature allows the user to cut the start and/or the end of the uploaded audio file. Crucially, time-based information such as chapters etc. will be shifted accordingly. Fade In/Out time of audio files: See Docs This allows you to set the fade in/out time (in ms) at the start/end of output files. The default fade time is 100ms, but values can be set between 0ms and 5000ms. This feature is also available in our Auphonic Leveler Desktop App. Adding intro and outro: See Docs Automatically add intros and outros to your main audio input file, as it is also available in our web interface. Add multiple intros or outros: See Docs Using our API, you can also add multiple intros or outros to a production. These intros or outros are played in series. Overlapping intros/outros: See Docs This feature allows intros/outros to overlap either the main audio or the following/previous intros/outros. Conclusion If you haven't explored our API already, the new audio inserts feature allows for greater flexibility and also dynamic ad insertion. If you offer online services to podcasters, the Auphonic API would also then allow you to pass on Auphonic's audio processing algorithms to your customers. 
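The curl call shown earlier can also be scripted. Here is a Python sketch using only the standard library, with the endpoint and payload shape taken from the example above (the helper names and the placeholder credentials are hypothetical, not part of the Auphonic API):

```python
import base64
import json
import urllib.request

def build_insert_payload(inserts):
    """Build the JSON body for the multi_input_files endpoint:
    one {input_file, type, offset} object per insert, offsets in seconds."""
    return [{"input_file": url, "type": "insert", "offset": offset}
            for url, offset in inserts]

def post_inserts(uuid, username, password, inserts):
    """POST audio inserts to a production (performs a network request)."""
    url = "https://auphonic.com/api/production/%s/multi_input_files.json" % uuid
    body = json.dumps(build_insert_payload(inserts)).encode("utf-8")
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    req = urllib.request.Request(url, data=body, headers={
        "Content-Type": "application/json",
        "Authorization": "Basic " + token,  # same basic auth as curl -u
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

build_insert_payload is a pure function, so the request body can be inspected without touching the network; post_inserts performs the actual authenticated POST.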
If this is of interest to you or you have any new feature suggestions that you feel could benefit your company, please get in touch. We are always happy to extend the functionality of our products! Full Article Audio News
oni Resumable File Uploads to Auphonic By feedproxy.google.com Published On :: Tue, 04 Sep 2018 09:39:53 +0000 Large file uploads in a web browser are problematic, even in 2018. If working with a poor network connection, uploads can fail and have to be retried from the start. At Auphonic, our users have to upload large audio and video files, or multiple media files when creating a multitrack production. To minimize any potential issues, we integrated various external services which are specialized for large file transfers, like FTP, SFTP, Dropbox, Google Drive, S3, etc. To further minimize issues, as of today we have also released resumable and chunked direct file uploads in the web browser to auphonic.com. If you are not interested in the technical details, please just go to the section Resumable Uploads in Auphonic below. The Problem with Large File Uploads in the Browser If using either mobile networks (which remain fragile) or unstable WiFi connections, file uploads are often interrupted and will fail. There are also many areas in the world where connections are quite poor, which makes uploading big files frustrating. After an interrupted file upload, the web browser must restart the whole upload from the start, which is a problem when it happens in the middle of a 4GB video file upload on a slow connection. Furthermore, the longer an upload takes, the more likely it is to have a network glitch interrupting the upload, which then has to be retried from the start. The Solution: Chunked, Resumable Uploads To avoid user frustration, we need to be able to detect network errors and potentially resume an upload without having to restart it from the beginning. To achieve this, we have to split a file upload in smaller chunks directly within the web browser, so that these chunks can then be sent to the server afterwards. If an upload fails or the user wants to pause, it is possible to resume it later and only send those chunks that have not already been uploaded. 
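The chunk-and-resume idea described above is independent of any particular library: split the file into fixed-size chunks, track which chunk indices the server has confirmed, and send only the missing ones. A client-side sketch (the 1 MiB chunk size and the bookkeeping are illustrative, not Auphonic's actual upload protocol):

```python
CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk (illustrative value)

def split_chunks(data, chunk_size=CHUNK_SIZE):
    """Split a byte string into fixed-size chunks; the last may be shorter."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def missing_chunks(total, uploaded):
    """Indices still to send, given the set of server-confirmed indices."""
    return [i for i in range(total) if i not in uploaded]

def resume_upload(data, uploaded, send):
    """Send only the missing chunks via send(index, chunk).

    `uploaded` is the set of chunk indices the server already confirmed,
    e.g. obtained from a status request after an interrupted upload."""
    chunks = split_chunks(data)
    sent = []
    for i in missing_chunks(len(chunks), uploaded):
        send(i, chunks[i])
        sent.append(i)
    return sent
```

After a network failure, the client re-queries the confirmed set and calls resume_upload again, so only the tail of the file is retransmitted.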
If there is a network interruption or change, the upload will be retried automatically. Companies like Dropbox, Google, Amazon AWS etc. all have their own protocols and APIs for chunked uploads, but there are also some open source implementations available, which offer resumable uploads: resumable.js [link]: "A JavaScript library providing multiple simultaneous, stable and resumable uploads via the HTML5 File API" This solution is a JavaScript library only and requires that the protocol be implemented on the server as well. tus.io [link]: "Open Protocol for Resumable File Uploads" Tus.io offers a simple, cheap and reusable stack for clients and servers (in many languages). They have a blog with further information about resumable uploads, see the tus blog. plupload [link]: A JavaScript library, similar to resumable.js, which requires a separate server implementation. We chose to use resumable.js and developed our own server implementation. Resumable Uploads in Auphonic If you upload files to a singletrack or multitrack production, you will see the upload progress bar and a pause button, which is one way to pause and resume an upload: It is also possible to close the browser completely or shut down your computer during the upload, then edit the production and upload the file again later. This will just resume the file upload from the position where it was stopped before. (Previously uploaded chunks are saved for 24h on our servers; after that you have to start the whole upload again.) In case of a network problem or if you switch to a different connection, we will resume the upload automatically. This should solve many problems which were reported by some users in the past! You can of course also use any of our external services for stable incoming and outgoing file transfers! Do you still have Uploading Issues? We hope that uploads to Auphonic are much more reliable now, even on poor connections. If you still experience any problems, please let us know. 
We are very happy about any bug reports and will do our best to fix them! Full Article Development News
oni Auphonic Adaptive Leveler Customization (Beta Update) By feedproxy.google.com Published On :: Mon, 05 Nov 2018 11:42:22 +0000 In late August, we launched the private beta program of our advanced audio algorithm parameters. After feedback from our users and many new experiments, we are proud to release a complete rework of the Adaptive Leveler parameters: In the previous version, we based our Adaptive Leveler parameters on the Loudness Range descriptor (LRA), which is included in the EBU R128 specification. Although it worked, it turned out that it is very difficult to set a loudness range target for diverse audio content that includes speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values. Therefore we developed our own algorithm to measure the dynamic range of audio signals, which works similarly for speech, music and other audio content. The following advanced parameters for our Adaptive Leveler allow you to customize which parts of the audio should be leveled (foreground, all, speech, music, etc.), how much they should be leveled (dynamic range), and how much micro-dynamics compression should be applied. To try out the new algorithms, please join our private beta program and let us know your feedback! Leveler Preset The Leveler Preset defines which parts of the audio should be adjusted by our Adaptive Leveler: Default Leveler: Our classic, default leveling algorithm as demonstrated in the Leveler Audio Examples. Use it if you are unsure. Foreground Only Leveler: This preset reacts slower and levels foreground parts only. Use it if you have background speech or background music, which should not be amplified. Fast Leveler: A preset which reacts much faster. It is built for recordings with fast and extreme loudness differences, for example, to amplify very quiet questions from the audience in a lecture recording, to balance fast-changing soft and loud voices within one audio track, etc. 
Amplify Everything: Amplify as much as possible. Similar to the Fast Leveler, but also amplifies non-speech background sounds like noise. Leveler Dynamic Range Our default Leveler tries to normalize all speakers to a similar loudness so that a consumer in a car or subway doesn't feel the need to reach for the volume control. However, in other environments (living room, cinema, etc.) or in dynamic recordings, you might want more level differences (Dynamic Range, Loudness Range / LRA) between speakers and within music segments. The parameter Dynamic Range controls how much leveling is applied: higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB. We also like to call this the Loudness Comfort Zone: between a minimum and a maximum possible level (the comfort zone), no leveling is applied. So if your input file already has a small dynamic range (is within the comfort zone), our leveler will just be bypassed. Example Use Cases: Higher dynamic range values should be used if you want to keep more loudness differences in dynamic narration or dynamic music recordings (live concert/classical). It is also possible to utilize this parameter to generate automatic mixdowns with different loudness range (LRA) values for different target environments (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.). Compressor Controls Micro-Dynamics Compression: The compressor reduces the volume of short and loud spikes like "p", "t" or laughter (short-term dynamics) and also shapes the sound of your voice (it will sound more or less "processed"). The Leveler, on the other hand, adjusts mid-term level differences, as done by a sound engineer using the faders of an audio mixer, so that a listener doesn't have to adjust the playback volume all the time. 
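The Loudness Comfort Zone can be pictured as a dead band around a target level: a segment already inside the band gets no gain, and a segment outside it is pulled toward the nearest band edge. A toy sketch with dB values (the hard band edges and the default numbers are illustrative, not the actual Adaptive Leveler algorithm):

```python
def comfort_zone_gain(segment_level, target=-18.0, dynamic_range=6.0):
    """Gain in dB that pulls a segment level toward a comfort zone of
    +/- dynamic_range/2 around the target; zero inside the zone."""
    lo = target - dynamic_range / 2.0
    hi = target + dynamic_range / 2.0
    if segment_level < lo:
        return lo - segment_level   # too quiet: amplify up to the lower edge
    if segment_level > hi:
        return hi - segment_level   # too loud: attenuate to the upper edge
    return 0.0                      # inside the comfort zone: leveler bypassed
```

Increasing dynamic_range widens the band, so more segments pass through untouched, which matches the rule of thumb above that higher values mean less leveling.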
For more details please see Loudness Normalization and Compression of Podcasts and Speech Audio. Possible values are: Auto: The compressor setting depends on the selected Leveler Preset. Medium compression is used in the Foreground Only and Default Leveler presets, Hard compression in our Fast Leveler and Amplify Everything presets. Soft: Uses less compression. Medium: Our default setting. Hard: More compression, especially tries to compress short and extreme level overshoots. Use this preset if you want your voice to sound very processed, or if you have extreme and fast-changing level differences. Off: No short-term dynamics compression is used at all, only mid-term leveling. Switch off the compressor if you just want to adjust the loudness range without any additional micro-dynamics compression. Separate Music/Speech Parameters Use the switch Separate Music/Speech Parameters (top right) to see separate Adaptive Leveler parameters for music and speech segments, to control all leveling details separately for speech and music parts: For dialog intelligibility improvements in films and TV, it is important that the speech/dialog level and loudness range is not too soft compared to the overall programme level and loudness range. This parameter allows you to use more leveling in speech parts while keeping music and FX elements less processed. Note: Speech, music and overall loudness and loudness range of your production are also displayed in our Audio Processing Statistics! Example Use Case: Music live recordings or dynamic music mixes, where you want to amplify all speakers (speech dynamic range should be small) but keep the dynamic range within and between music segments (music dynamic range should be high). Dialog intelligibility improvements for films and TV, without affecting music and FX elements. 
Other Advanced Audio Algorithm Parameters We also offer advanced audio parameters for our Noise, Hum Reduction and Global Loudness Normalization algorithms: For more details, please see the Advanced Audio Algorithms Documentation. Want to know more? If you want to know more details about our advanced algorithm parameters (especially the leveler parameters), please listen to the following podcast interview with Chris Curran (Podcast Engineering School): Auphonic’s New Advanced Features, with Georg Holzmann – PES 108 Advanced Parameters Private Beta and Feedback At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs. Please let us know your case studies, if you need any other algorithm parameters or if you have any questions! Here are some private beta invitation codes: jbwCVpLYrl 6zmLqq8o3z RXYIUbC6al QDmIZLuPKa JIrnGRZBgl SWQOWeZOBD ISeBCA9gTy w5FdsyhZVI qWAvANQ5mC twOjdHrit3 KwnL2Le6jB 63SE2V54KK G32AULFyaM 3H0CLYAwLU mp1GFNVZHr swzvEBRCVa rLcNJHUNZT CGGbL0O4q1 5o5dUjruJ9 hAggWBpGvj ykJ57cFQSe 0OHAD2u1Dx RG4wSYTLbf UcsSYI78Md Xedr3NPCgK mI8gd7eDvO 0Au4gpUDJB mYLkvKYz1C ukrKoW5hoy S34sraR0BU J2tlV0yNwX QwNdnStYD3 Zho9oZR2e9 jHdjgUq420 51zLbV09p4 c0cth0abCf 3iVBKHVKXU BK4kTbDQzt uTBEkMnSPv tg6cJtsMrZ BdB8gFyhRg wBsLHg90GG EYwxVUZJGp HLQ72b65uH NNd415ktFS JIm2eTkxMX EV2C5RAUXI a3iwbxWjKj X1AT7DCD7V y0AFIrWo5l We are happy to send further invitation codes to all interested users - please do not hesitate to contact us! If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters: Auphonic Algorithm Parameters Private Beta Activation Full Article Audio Development
oni David Polonia By feedproxy.google.com Published On :: Fri, 08 May 2020 08:52:38 +0000 Full Article Inspiration siteinspire Web
oni How To Design An Iconic Logo By www.noupe.com Published On :: Fri, 08 May 2020 05:46:01 PDT https://www.noupe.com/design/how-to-design-an-iconic-logo.html Full Article
oni California Study: Four Widely Used Neonicotinoid Pesticides Harm Bees By feedproxy.google.com Published On :: Thu, 02 Aug 2018 18:33:52 +0000 Center for Biological Diversity Press Release WASHINGTON – Four commonly used neonicotinoid pesticides can harm bees and other pollinators, according to a new analysis by California’s Department of Pesticide Regulation. The study found that current approved uses of the “neonics” … Continue reading → Full Article Endangered Species ET News Bee California EPA Neonicotinoid Pesticides save the bees
oni Transitioning from academic research to UX research By feedproxy.google.com Published On :: Mon, 7 Oct 2019 08:08:19 GMT Doing UX research in a university is very different to doing UX research in a business setting. If you're an academic making the leap, what are the main differences you need to keep in mind? Full Article
oni An Introduction To React With Ionic By feedproxy.google.com Published On :: Mon, 04 May 2020 10:30:00 +0000 The Ionic Framework is an open-source UI toolkit for building fast, high-quality applications using web technologies with integrations for popular frameworks like Angular and React. Ionic enables cross-platform development using either Cordova or Capacitor, with the latter featuring support for desktop application development using Electron. In this article, we will explore Ionic with the React integration by building an app that displays comics using the Marvel Comics API and allows users to create a collection of their favorites. Full Article
oni The $\kappa$-Newtonian and $\kappa$-Carrollian algebras and their noncommutative spacetimes. (arXiv:2003.03921v2 [hep-th] UPDATED) By arxiv.org Published On :: We derive the non-relativistic $c \to \infty$ and ultra-relativistic $c \to 0$ limits of the $\kappa$-deformed symmetries and corresponding spacetime in (3+1) dimensions, with and without a cosmological constant. We apply the theory of Lie bialgebra contractions to the Poisson version of the $\kappa$-(A)dS quantum algebra, and quantize the resulting contracted Poisson-Hopf algebras, thus giving rise to the $\kappa$-deformation of the Newtonian (Newton-Hooke and Galilei) and Carrollian (Para-Poincaré, Para-Euclidean and Carroll) quantum symmetries, including their deformed quadratic Casimir operators. The corresponding $\kappa$-Newtonian and $\kappa$-Carrollian noncommutative spacetimes are also obtained as the non-relativistic and ultra-relativistic limits of the $\kappa$-(A)dS noncommutative spacetime. These constructions allow us to analyze the non-trivial interplay between the quantum deformation parameter $\kappa$, the curvature parameter $\eta$ and the speed of light parameter $c$. Full Article
oni A stochastic approach to the synchronization of coupled oscillators. (arXiv:2002.04472v2 [nlin.AO] UPDATED) By arxiv.org Published On :: This paper deals with an optimal control problem associated to the Kuramoto model describing the dynamical behavior of a network of coupled oscillators. Our aim is to design a suitable control function allowing us to steer the system to a synchronized configuration in which all the oscillators are aligned on the same phase. This control is computed via the minimization of a given cost functional associated with the dynamics considered. For this minimization, we propose a novel approach based on the combination of a standard Gradient Descent (GD) methodology with the recently-developed Random Batch Method (RBM) for the efficient numerical approximation of collective dynamics. Our simulations show that the employment of RBM improves the performances of the GD algorithm, reducing the computational complexity of the minimization process and allowing for a more efficient control calculation. Full Article
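For readers unfamiliar with the Random Batch Method used in the abstract above: it replaces the all-to-all Kuramoto coupling with interactions inside small, randomly reshuffled batches, reducing the per-step cost from O(N^2) to roughly O(N x batch size). A minimal forward-simulation sketch of the uncontrolled dynamics (the control term from the paper is omitted; this is an illustration, not the authors' code):

```python
import math
import random

def kuramoto_rbm_step(theta, omega, coupling, dt, batch_size, rng):
    """One explicit-Euler step of the Kuramoto model with the Random
    Batch Method: oscillators only interact within small random batches."""
    n = len(theta)
    idx = list(range(n))
    rng.shuffle(idx)                      # fresh random batches every step
    new_theta = list(theta)
    for start in range(0, n, batch_size):
        batch = idx[start:start + batch_size]
        for i in batch:
            interaction = sum(math.sin(theta[j] - theta[i])
                              for j in batch if j != i)
            k = coupling / max(len(batch) - 1, 1)
            new_theta[i] = theta[i] + dt * (omega[i] + k * interaction)
    return new_theta

def order_parameter(theta):
    """Kuramoto order parameter |r| in [0, 1]; 1 means full phase sync."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)
```

With identical natural frequencies and positive coupling, repeated steps drive the order parameter toward 1; the paper's control problem then asks how to steer the system to such a synchronized configuration optimally.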