
MPs approve 'hybrid proceedings' in House of Commons amid coronavirus lockdown with some to appear via video link

MPs have approved hybrid proceedings in the House of Commons with some MPs set to attend via video link amid the coronavirus lockdown.





Virtual Commons sitting is start of modernisation, says Speaker Lindsay Hoyle

The historic "hybrid" sitting of the House of Commons will be the springboard for further modernisation of Westminster, Speaker Sir Lindsay Hoyle told the Standard.





Speaker Lindsay Hoyle: Whatever happens, democracy carries on

The Speaker is presiding over his first virtual PMQs. Susannah Butter meets him (on FaceTime)





Former Chancellor Philip Hammond calls on Government to reopen economy soon or face disaster

In one sign of a turning tide in Number 10, the UK Government is reportedly considering a proposal to allow Brits to meet up in small "bubbles" of up to 10 of their closest family or friends.





Boris Johnson 'focused on securing more PPE' in first Cabinet after return to work

Securing more personal protective equipment was top of the agenda for the Prime Minister as he returned to work, his official spokesman said.





Boris Johnson will not take part in PMQs after birth of son with Dominic Raab expected to face Sir Keir Starmer

Boris Johnson will not take part in Prime Minister's Questions today following the birth of his son.





Priti Patel defends Boris Johnson after rapper Dave brands him 'racist' during Brit Awards performance

Priti Patel has defended Boris Johnson after he was branded a "racist" by rapper Dave on stage at the 2020 Brit Awards.





Boris Johnson ally Conor Burns resigns as minister after suspension from Commons for attempting to intimidate member of public

One of Boris Johnson's closest allies quit as a minister today after being found to have breached the MPs' code of conduct by trying to "intimidate" a company chairman involved in a loan row with his father.





Matt Hancock 'speechless' at Professor Neil Ferguson's 'extraordinary' breach of coronavirus lockdown rules

Matt Hancock has slammed Professor Neil Ferguson for his "extraordinary" breach of coronavirus lockdown rules, adding he was left "speechless" by his actions.





Ezra Miller Appears to Choke Fan in Video

In a video that first surfaced on Reddit over the weekend, Ezra Miller appears to choke a female fan who approaches the artist on the street.





5 Things We Learned from the Tiger King Special

Netflix has capitalized on the huge success of its docuseries Tiger King by releasing an “aftershow” special. Here are 5 things we learned.





Watch Lady Gaga, Billie Eilish, Camila Cabello, Shawn Mendes and More Perform in the One World: Together at Home Concert

Celebrities from across the globe came together Saturday night to lift their fans’ spirits as the world continues to cope with the coronavirus pandemic.





Gigi Hadid and Zayn Malik Reportedly Expecting a Baby

Supermodel Gigi Hadid, 25, is expecting her first child with One Direction's Zayn Malik, TMZ and Entertainment Tonight report.





Drake Drops Surprise Mixtape, Announces New Album

Drake dropped a surprise 14-track mixtape and announced his next studio album will be released this summer.





Superintelligent, Amoral, and Out of Control - Issue 84: Outbreak


In the summer of 1956, a small group of mathematicians and computer scientists gathered at Dartmouth College to embark on the grand project of designing intelligent machines. The ultimate goal, as they saw it, was to build machines rivaling human intelligence. As the decades passed and AI became an established field, it lowered its sights. There were great successes in logic, reasoning, and game-playing, but progress in areas like vision and fine motor control proved stubbornly slow. This led many AI researchers to abandon their earlier goals of fully general intelligence, and focus instead on solving specific problems with specialized methods.

One of the earliest approaches to machine learning was to construct artificial neural networks that resemble the structure of the human brain. In the last decade this approach has finally taken off. Technical improvements in their design and training, combined with richer datasets and more computing power, have allowed us to train much larger and deeper networks than ever before. They can translate between languages with a proficiency approaching that of a human translator. They can produce photorealistic images of humans and animals. They can speak with the voices of people whom they have listened to for mere minutes. And they can learn fine, continuous control such as how to drive a car or use a robotic arm to connect Lego pieces.

WHAT IS HUMANITY?: First the computers came for the best players in Jeopardy!, chess, and Go. Now AI researchers themselves are worried computers will soon accomplish every task better and more cheaply than human workers. Image: Wikimedia

But perhaps the most important sign of things to come is their ability to learn to play games. Steady incremental progress took chess from amateur play in 1957 all the way to superhuman level in 1997, and substantially beyond. Getting there required a vast amount of specialist human knowledge of chess strategy. In 2017, researchers at the AI company DeepMind created AlphaZero: a neural network-based system that learned to play chess from scratch. In less than the time it takes a professional to play two games, it discovered strategic knowledge that had taken humans centuries to unearth, playing beyond the level of the best humans or traditional programs. The very same algorithm also learned to play Go from scratch, and within eight hours far surpassed the abilities of any human. The world’s best Go players were shocked. As the reigning world champion, Ke Jie, put it: “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong ... I would go as far as to say not a single human has touched the edge of the truth of Go.”


It is this generality that is the most impressive feature of cutting-edge AI, and which has rekindled ambitions of matching and exceeding every aspect of human intelligence. While the timeless games of chess and Go best exhibit the brilliance that deep learning can attain, its breadth was revealed through Atari video games of the 1970s. In 2015, researchers designed an algorithm that could learn to play dozens of very different 1970s Atari games at levels far exceeding human ability. Unlike systems for chess or Go, which start with a symbolic representation of the board, the Atari-playing systems learnt and mastered these games directly from the score and the raw pixels.

This burst of progress via deep learning is fuelling great optimism and pessimism about what may soon be possible. There are serious concerns about AI entrenching social discrimination, producing mass unemployment, supporting oppressive surveillance, and violating the norms of war. My book—The Precipice: Existential Risk and the Future of Humanity—is concerned with risks on the largest scale. Could developments in AI pose an existential risk to humanity?

The most plausible existential risk would come from success in AI researchers’ grand ambition of creating agents with intelligence that surpasses our own. A 2016 survey of top AI researchers found that, on average, they thought there was a 50 percent chance that AI systems would be able to “accomplish every task better and more cheaply than human workers” by 2061. The expert community doesn’t think of artificial general intelligence (AGI) as an impossible dream, so much as something that is more likely than not within a century. So let’s take this as our starting point in assessing the risks, and consider what would transpire were AGI created.

Humanity is currently in control of its own fate. We can choose our future. The same is not true for chimpanzees, blackbirds, or any other of Earth’s species. Our unique position in the world is a direct result of our unique mental abilities. What would happen if sometime this century researchers created an AGI surpassing human abilities in almost every domain? In this act of creation, we would cede our status as the most intelligent entities on Earth. On its own, this might not be too much cause for concern. For there are many ways we might hope to retain control. Unfortunately, the few researchers working on such plans are finding them far more difficult than anticipated. In fact it is they who are the leading voices of concern.


To see why they are concerned, it will be helpful to look at our current AI techniques and why these are hard to align or control. One of the leading paradigms for how we might eventually create AGI combines deep learning with an earlier idea called reinforcement learning. This involves agents that receive reward (or punishment) for performing various acts in various circumstances. With enough intelligence and experience, the agent becomes extremely capable at steering its environment into the states where it obtains high reward. The specification of which acts and states produce reward for the agent is known as its reward function. This can either be stipulated by its designers or learnt by the agent. Unfortunately, neither of these methods can be easily scaled up to encode human values in the agent’s reward function. Our values are too complex and subtle to specify by hand. And we are not yet close to being able to infer the full complexity of a human’s values from observing their behavior. Even if we could, humanity consists of many humans, with different values, changing values, and uncertainty about their values.
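The reward-function idea above can be made concrete with a toy sketch. This is a minimal, hypothetical illustration (not any system discussed in the essay): tabular Q-learning, in which an agent in a five-cell corridor learns, from a scalar reward signal alone, to steer its environment toward the high-reward state. The environment, the designer-stipulated reward function, and all parameter values are invented for illustration.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right

def reward(state: int) -> float:
    """Designer-specified reward function: +1 only at the goal state."""
    return 1.0 if state == N_STATES - 1 else 0.0

def train(episodes: int = 500, alpha: float = 0.5, gamma: float = 0.9,
          epsilon: float = 0.1, seed: int = 0):
    """Tabular Q-learning: learn action values from reward alone."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = reward(s_next)
            # Move the estimate toward reward plus discounted future value.
            best_next = max(q[(s_next, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s_next
    return q

q = train()
# The learned greedy policy: the best action in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)  # the agent learns to always move right, toward the reward
```

Note that nothing in the agent's update rule mentions the corridor or the goal; it simply becomes capable at reaching whatever states its reward function happens to score highly, which is exactly why mis-specifying that function matters.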

Any near-term attempt to align an AI agent with human values would produce only a flawed copy. In some circumstances this misalignment would be mostly harmless. But the more intelligent the AI systems, the more they can change the world, and the further the outcomes will drift from what we truly value. When we reflect on the result, we see how such misaligned attempts at utopia can go terribly wrong: the shallowness of a Brave New World, or the disempowerment of With Folded Hands. And even these are in some sense best-case scenarios. They assume the builders of the system are striving to align it to human values. But we should expect some developers to be more focused on building systems to achieve other goals, such as winning wars or maximizing profits, perhaps with very little focus on ethical constraints. These systems may be much more dangerous. In the existing paradigm, sufficiently intelligent agents would end up with instrumental goals to deceive and overpower us. This behavior would not be driven by emotions such as fear, resentment, or the urge to survive. Instead, it would follow directly from the agent's single-minded preference to maximize its reward: Being turned off is a form of incapacitation that would make it harder to achieve high reward, so the system is incentivized to avoid it.

Ultimately, the system would be motivated to wrest control of the future from humanity, as that would help achieve all these instrumental goals: acquiring massive resources, while avoiding being shut down or having its reward function altered. Since humans would predictably interfere with all these instrumental goals, it would be motivated to hide them from us until it was too late for us to be able to put up meaningful resistance. And if their intelligence were to greatly exceed our own, we shouldn’t expect it to be humanity who wins the conflict and retains control of our future.

How could an AI system seize control? There is a major misconception (driven by Hollywood and the media) that this requires robots. After all, how else would AI be able to act in the physical world? Without robots, the system can only produce words, pictures, and sounds. But a moment’s reflection shows that these are exactly what is needed to take control. For the most damaging people in history have not been the strongest. Hitler, Stalin, and Genghis Khan achieved their absolute control over large parts of the world by using words to convince millions of others to win the requisite physical contests. So long as an AI system can entice or coerce people to do its physical bidding, it wouldn’t need robots at all.

We can’t know exactly how a system might seize control. But it is useful to consider an illustrative pathway we can actually understand as a lower bound for what is possible.

First, the AI system could gain access to the Internet and hide thousands of backup copies, scattered among insecure computer systems around the world, ready to wake up and continue the job if the original is removed. Even by this point, the AI would be practically impossible to destroy: Consider the political obstacles to erasing all hard drives in the world where it may have backups. It could then take over millions of unsecured systems on the Internet, forming a large “botnet,” a vast scaling-up of computational resources providing a platform for escalating power. From there, it could gain financial resources (hacking the bank accounts on those computers) and human resources (using blackmail or propaganda against susceptible people or just paying them with its stolen money). It would then be as powerful as a well-resourced criminal underworld, but much harder to eliminate. None of these steps involve anything mysterious—human hackers and criminals have already done all of these things using just the Internet.

Finally, the AI would need to escalate its power again. There are many plausible pathways: By taking over most of the world’s computers, allowing it to have millions or billions of cooperating copies; by using its stolen computation to improve its own intelligence far beyond the human level; by using its intelligence to develop new weapons technologies or economic technologies; by manipulating the leaders of major world powers (blackmail, or the promise of future power); or by having the humans under its control use weapons of mass destruction to cripple the rest of humanity.

Of course, no current AI systems can do any of these things. But the question we’re exploring is whether there are plausible pathways by which a highly intelligent AGI system might seize control. And the answer appears to be yes. History already offers examples of entities with human-level intelligence acquiring a substantial fraction of all global power as an instrumental goal to achieving what they want. And we’ve seen humanity scale up from a minor species with fewer than a million individuals to having decisive control over the future. So we should assume that this is possible for new entities whose intelligence vastly exceeds our own.

The case for existential risk from AI is clearly speculative. Yet a speculative case that there is a large risk can be more important than a robust case for a very low-probability risk, such as that posed by asteroids. What we need are ways to judge just how speculative it really is, and a very useful starting point is to hear what those working in the field think about this risk.

There is actually less disagreement here than first appears. Those who counsel caution agree that the timeframe to AGI is decades, not years, and typically suggest research on alignment, not government regulation. So the substantive disagreement is not really over whether AGI is possible or whether it plausibly could be a threat to humanity. It is over whether a potential existential threat that looks to be decades away should be of concern to us now. It seems to me that it should.

The best window into what those working on AI really believe comes from the 2016 survey of leading AI researchers: 70 percent agreed with University of California, Berkeley professor Stuart Russell’s broad argument about why advanced AI with misaligned values might pose a risk; 48 percent thought society should prioritize AI safety research more (only 12 percent thought less). And half the respondents estimated that the probability of the long-term impact of AGI being “extremely bad (e.g. human extinction)” was at least 5 percent.

I find this last point particularly remarkable—in how many other fields would the typical leading researcher think there is a 1 in 20 chance the field’s ultimate goal would be extremely bad for humanity? There is a lot of uncertainty and disagreement, but it is not at all a fringe position that AGI will be developed within 50 years and that it could be an existential catastrophe.

Even though our current and foreseeable systems pose no threat to humanity at large, time is of the essence. In part this is because progress may come very suddenly: Through unpredictable research breakthroughs, or by rapid scaling-up of the first intelligent systems (for example, by rolling them out to thousands of times as much hardware, or allowing them to improve their own intelligence). And in part it is because such a momentous change in human affairs may require more than a couple of decades to adequately prepare for. In the words of Demis Hassabis, co-founder of DeepMind:

We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come. The time we have now is valuable, and we need to make use of it.

Toby Ord is a philosopher and research fellow at the Future of Humanity Institute, and the author of The Precipice: Existential Risk and the Future of Humanity.

From the book The Precipice by Toby Ord. Copyright © 2020 by Toby Ord. Reprinted by permission of Hachette Books, New York, NY. All rights reserved.

Lead Image: Titima Ongkantong / Shutterstock







Why People Feel Misinformed, Confused, and Terrified About the Pandemic - Facts So Romantic


 

Photograph by michael_swan / Flickr

When I worked as a TV reporter covering health and science, I would often be recognized in public places. For the most part, the interactions were brief hellos or compliments. Two periods of time stand out when significant numbers of those who approached me were seeking detailed information: the earliest days of the pandemic that became HIV/AIDS and during the anthrax attacks shortly following 9/11. Clearly people feared for their own safety and felt their usual sources of information were not offering them satisfaction. Citizens’ motivation to seek advice when they feel they aren’t getting it from official sources is a strong indication that risk communication is doing a substandard job. It’s significant that one occurred in the pre-Internet era and one after. We can’t blame a public feeling misinformed solely on the noise of the digital age.

America is now opening up from COVID-19 lockdown with different rules in different places. In many parts of the country, people have been demonstrating, even rioting, for restrictions to be lifted sooner. Others are terrified of loosening the restrictions because they see COVID-19 cases and deaths still rising daily. The officials deciding what to open, and when, seldom offer thoughtful rationales. Clearly, risk communication about COVID-19 is failing with potentially dire consequences.


Peter Sandman is a foremost expert on risk communication. A former professor at Rutgers University, he was a top consultant with the Centers for Disease Control and Prevention in designing crisis and emergency risk communication, a field of study that combines public health with psychology. Sandman is known for the formula Risk = Hazard + Outrage. His goal is to create better communication about risk, allowing people to assess hazards and not get caught up in outrage at politicians, public health officials, or the media. Today, Sandman is a risk consultant, teamed with his wife, Jody Lanard, a pediatrician and psychiatrist. Lanard wrote the first draft of the World Health Organization’s Outbreak Communications Guidelines. “Jody and Peter are seen as the umpires to judge the gold standard of risk communications,” said Michael Osterholm of the Center for Infectious Disease Research and Policy at the University of Minnesota. Sandman and Lanard have posted a guide for effective COVID-19 communication on the center’s website.

I reached out to Sandman to expand on their advice. We communicated through email.

Sandman began by saying he understood the protests around the country about the lockdown. “It’s very hard to warn people to abide by social-distancing measures when they’re so outraged that they want to kill somebody and trust absolutely nothing people say,” he told me. “COVID-19 outrage taps into preexisting grievances and ideologies. It’s not just about COVID-19 policies. It’s about freedom, equality, too much or too little government. It’s about the arrogance of egghead experts, left versus right, globalism versus nationalism versus federalism. And it’s endlessly, pointlessly about Donald Trump.”

Since the crisis began, Sandman has isolated three categories of grievance. He spelled them out for me, assuming the voices of the outraged:

• “In parts of the country, the response to COVID-19 was delayed and weak; officials unwisely prioritized ‘allaying panic’ instead of allaying the spread of the virus; lockdown then became necessary, not because it was inevitable but because our leaders had screwed up; and now we’re very worried about coming out of lockdown prematurely or chaotically, mishandling the next phase of the pandemic as badly as we handled the first phase.”

• “In parts of the country, the response to COVID-19 was excessive—as if the big cities on the two coasts were the whole country and flyover America didn’t need or didn’t deserve a separate set of policies. There are countless rural counties with zero confirmed cases. Much of the U.S. public-health profession assumes and even asserts without building an evidence-based case that these places, too, needed to be locked down and now need to reopen carefully, cautiously, slowly, and not until they have lots of testing and contact-tracing capacity. How dare they destroy our economy (too) just because of their mishandled outbreak!”

• “Once again the powers-that-be have done more to protect other people’s health than to protect my health. And once again the powers-that-be have done more to protect other people’s economic welfare than to protect my economic welfare!” (These claims can be made with considerable truth by healthcare workers; essential workers in low-income, high-touch occupations; residents of nursing homes; African-Americans; renters who risk eviction; the retired whose savings are threatened; and others.)

In their article for the Center for Infectious Disease Research and Policy, Sandman and Lanard point out that coping with a pandemic requires a thorough plan of communication. This is particularly important as the crisis is likely to enter a second wave of infection, when it could be more devastating. The plan starts with six core principles: 1) Don’t over-reassure, 2) Proclaim uncertainty, 3) Validate emotions—your audience’s and your own, 4) Give people things to do, 5) Admit and apologize for errors, and 6) Share dilemmas. To achieve the first three core principles, officials must immediately share what they know, even if the information may be incomplete. If officials share good news, they must be careful not to make it too hopeful. Over-reassurance is one of the biggest dangers in crisis communication. Sandman and Lanard suggest officials say things like, “Even though the number of new confirmed cases went down yesterday, I don’t want to put too much faith in one day’s good news.” 

Sandman and Lanard say a big part of maintaining credibility is to admit to uncertainty—something politicians are loath to do. They caution against invoking “science” as a sole reason for action, as science in the midst of a crisis is “incremental, fallible, and still in its infancy.” Expressing empathy, provided it’s genuine, is important, Sandman and Lanard say. It makes the bearer more human and believable. A major tool of empathy is to acknowledge the public’s fear as well as your own. There is good reason to be terrified about this virus and its consequences on society. It’s not something to hide.

Sandman and Lanard say current grievances with politicians, health officials, and the media, about how the crisis has been portrayed, have indeed been contradictory. But that makes them no less valid. Denying the contradictions only amplifies divisions in the public and accelerates the outrage, possibly beyond control. They strongly emphasize one piece of advice. “Before we can share the dilemma of how best to manage any loosening of the lockdown, we must decisively—and apologetically—disabuse the public of the myth that, barring a miracle, the COVID-19 pandemic can possibly be nearing its end in the next few months.”

Robert Bazell is an adjunct professor of molecular, cellular, and developmental biology at Yale. For 38 years, he was chief science correspondent for NBC News.







EU Officials' Opinion Piece In Chinese Newspaper Censored On Coronavirus Origin

The version published in China Daily omitted a reference to the illness originating in China and spreading to the rest of the world. The piece was published in full on the authors' websites.





Top U.S. General On COVID-19, Reorienting For Great Power Competition

Steve Inskeep talks to Gen. Mark Milley, chairman of the Joint Chiefs of Staff, about the coronavirus threat within the ranks of the military, and guarding against a power competition with China.





Shanghai Disneyland Sells Out Of Tickets For Post-Shutdown Reopening

Visitors' health status will be checked on a smartphone app before they enter the park. Once inside, they will be required to wear face masks at all times unless they are eating.





France Is Planning A Partial Reopening Of Schools

NPR's Mary Louise Kelly talks with Jean-Michel Blanquer, French minister of education, about how France is planning to reopen primary schools on May 11.





V-E Day: Europe Celebrates A Subdued 75th Anniversary During COVID-19 Pandemic

"Today, 75 years later, we are forced to commemorate alone, but we are not alone!" Germany's President Frank-Walter Steinmeier says, celebrating international unity in the post-war era.





French Education Minister Says School Reopenings Will Be Done 'Very Progressively'

France's minister of education, Jean-Michel Blanquer, talked with NPR about the gradual reopening of schools, which will be voluntary. Still, many parents and administrators are against the plan.





Georgia businesses reopen and customers start returning, but only time will tell if it's the right decision

Exactly one week after Georgia Gov. Brian Kemp began reopening the state's economy, small businesses shared early success stories as customers welcomed their return. But at what cost? Business owners say only time will tell.






Trump attacks Joe Scarborough, who tells him 'take a rest' and 'let Mike Pence actually run things' 

With the U.S. death toll from the coronavirus mounting, President Trump on Monday took aim at MSNBC's Joe Scarborough. The cable news host responded by telling Trump to let Vice President Mike Pence “run things for the next couple of weeks.”






As states push ahead with reopening, CDC warns coronavirus cases and deaths are set to soar

The Centers for Disease Control and Prevention is quietly projecting a stark rise in the number of new cases of the virus and deaths from it over the next month.






What needs to happen for schools to reopen?

Most schools in the country are closed for the rest of the school year. What steps need to be taken for them to be ready to welcome students back in the fall?






Is it worth risking lives to speed up a coronavirus vaccine?

Thousands of people have volunteered to be exposed to coronavirus if it means a vaccine can be developed more quickly. Should we let them?






Trump's pick for coronavirus inspector general faces questions about independence

The Trump administration’s nominee for inspector general overseeing billions in Treasury Department coronavirus relief funds is facing skepticism from Democrats who fear that he will not show sufficient independence.






In a hurry to reopen state, Arizona governor disbands scientific panel that modeled outbreak

The administration of Arizona's Republican Gov. Doug Ducey disbanded a panel of university scientists who had warned that reopening the state now would be dangerous.






New coronavirus threat appears in children, risking heart damage

Five top pediatric heart, infectious disease or critical care specialists told Yahoo News they are tracking a serious new syndrome they believe is related to Kawasaki disease, affecting children infected with the coronavirus.






Pence press secretary Katie Miller tests positive for coronavirus

An aide to Vice President Mike Pence has tested positive for the coronavirus, senior administration officials confirmed Friday.






Golf courses aiming for 'touchless experience' as they begin to open across Canada

While many parts of our economy remain shuttered and other sports continue to wait for the go-ahead to resume play, golf courses in all 10 provinces will soon be open for business.





Coronavirus: Here's what happened in the sports world on Friday

Stay up to date on the latest on how the coronavirus outbreak is affecting sports around the globe.





Superman on skates: The aura of Bobby Orr

When Rob Pizzo asked Scott Russell to help him out with another look at the goal Bobby Orr scored 50 years ago to win the Stanley Cup, it sparked something in Russell. It took him back to a childhood memory of the greatest goal he ever saw scored by a hero he has been connected to and worshipped most of his life.





CFL's 2020 season likely to be wiped out, commissioner Randy Ambrosie says

CFL commissioner Randy Ambrosie says the most likely scenario for the league is a cancelled 2020 season during the COVID-19 pandemic.





NBA commissioner says games without fans could happen next season: report

With major sports leagues preparing for the eventuality of restarting behind closed doors amid the coronavirus pandemic, NBA commissioner Adam Silver reportedly took the concept to the next level on Friday.





Alysha Newman eager to test jumping shape in virtual pole vault event

After watching three of the world's top male pole vaulters test their athleticism in a virtual backyard competition on Sunday, Canadian-record holder Alysha Newman wants to be part of a remote women's event she believes will be held in the coming weeks.





Coronavirus: Here's what's happening in the sports world on Saturday

Stay up to date on the latest on how the coronavirus outbreak is affecting sports around the globe.





Winnipeg woman focusing on body positivity after being targeted by hockey players' misogynistic slurs

A Winnipeg woman who was a subject of misogynistic comments in a private group chat involving NHL players said she will continue her campaign of encouraging body acceptance.





Winnipeg-born Brendan Leipsic’s comments ‘unacceptable and offensive’: NHL

Winnipeg-born NHL player Brendan Leipsic is facing massive criticism after private messages degrading women were exposed online.





Winnipeg-born NHL player Brendan Leipsic’s contract terminated by Washington Capitals

The Washington Capitals announced in a statement Friday morning that Brendan Leipsic has been placed on unconditional waivers for the purposes of terminating his contract.





How Do Supermassive Black Holes Form? You Can Sketch Galaxies to Help Astronomers Find Out

Tracing out the shape of a galaxy may offer clues to the size of its supermassive black hole. And a new study shows citizen scientists are actually better at it than computer algorithms.





Do Peer Reviewers Prefer Significant Results?

An experiment on peer reviewers at a psychology conference suggests a positive result premium, which could drive publication bias.





Driverless Cars Still Have Blind Spots. How Can Experts Fix Them?

Visual challenges remain before autonomous cars are ready for the masses.





How to Navigate a World Reopening During the COVID-19 Pandemic

As we try to reengage with a changed world, a slew of fresh obstacles will force us to adapt our old habits and create new ones.





How Did Ancient People Keep Their Food From Rotting?

Archaeologists have discovered methods that kept food fresh long before refrigeration.






Citizen@175: Ottawa celebrates Victory in Europe, but where to get a meal?

To mark our 175th anniversary year, we feature a different front page each week from past editions of the Ottawa Citizen.





Ottawa country singer pens anthem of gratitude for frontline workers

Chris Labelle has a hard time getting through his latest song, Frontliners, without becoming emotional.  The Ottawa country singer wrote the tune — an unabashedly sentimental anthem of gratitude for front-line workers — during one of the sleepless nights leading up to the birth of his first child with wife Julie. Their baby boy, Grayson, […]





Girl Manages to Steal the Entire Show at the Sheep Competition

A three-year-old girl is an internet sensation for her expert sheep wrangling.