
Hard News: Has Iran found an effective Covid-19 treatment?

For obvious reasons, there has been a lot of attention paid to work going into developing vaccines that could prevent Covid-19 infection, and drugs that could treat it. In particular, there has been some excitement about new animal trial data for remdesivir, a drug developed by Gilead Sciences. Gilead's share price rose nearly 10% on the day the trial data were announced.
It will be some time yet before the safety and efficacy of remdesivir is established, if ever (it's worth noting that it was tried, unsuccessfully, as a treatment for Ebola). And since I started work on this post…





Hard News: The last – and best – parts of the cannabis bill have arrived

Regular readers will know that I've been hanging out for the "market allocation" parts of the proposed Cannabis Legalisation and Control Bill, which will be the subject of a referendum this year.
While most media outlets ran inane stories last year on how many joints 14 grams added up to, it was clear to anyone who took the subject seriously that the questions of who would get to produce and sell cannabis and how licences would be awarded were vastly more important. And we've had to wait for answers to those.
Well, they're here. And it's very good news. From…





Hard News: ICYMI: Links and things I've been doing

Like most people, I've been staying at home, doing a bit in the garden, cooking a lot and managing occasional bouts of anxiety. I've also written more here than I have done for a while. At a time when every Friday night has me missing my mates, it's been nice to see you all again.
But in the midst of it all – and after everything else disappeared – I got a new gig. It's with my friends from Spark Lab, it's called The Pivot Reports and it's a series of live-streamed shows over the next six weeks talking to business owners…





Your Pet Loss Stories: 'My Sweet Jess. My Guardian Angel'

I got my sweet Jess for my birthday 14 years ago. Little did I know at the time that she would become my everything, the love of my life. She died 5 weeks





Your Pet Loss Diaries: 'Dea & Samantha', July 08, 2013

Hi my angel, Day 283: Damn. Each day is worse than the day before. “Time does not heal anything, it just teaches us how to deal with the pain . . .”





Your Pet Loss Stories: 'Storm, My Handsome Gentleman'

We got Storm when he was 13 weeks old as a companion for our Border Collie, Shadow. We decided on a Labrador because they were the opposite of BC's. Storm





Your Pet Loss Stories: 'My Loving Spaniel'

Since I was little we had a lovely golden working Cocker Spaniel. She was a lovely little dog and everyone in my family loved her. I grew up with her,





Your Pet Tributes: 'Tank'

Yesterday I lost my best friend. Nine years ago you saved my life and brought me happiness every day since. Selfishly I wanted to keep you around forever





Your Pet Tributes: 'Jason Hassan'

Jason: It's been one month now since you have gone to Rainbow Bridge. Jason your love was unconditional. You have left my life, but you will never leave





Your Pet Loss Stories: 'I Can Smile A Little Now'

I lost my beloved cat Gemma, seven months ago and when I think about her, I can smile a little. I can now think of the happy times and not that dreadful





Real Life Rainbow Bridge Stories: 'My Little Girl Candi'

She passed away just not too long ago, 6-12-13. It was very hard for me to put her down. I had her for 21 years, going on 22 years. I know I feel her presence





Your Pet Tributes: 'Jason Hassan'

Hey Jason! Just checking to see how you are doing at Rainbow Bridge? Hope you are making friends and being a good boy! I missed you so much my baby,





Your Pet Loss Stories: 'Patches, My Angel'

13 years ago I found out that I had breast cancer. I asked my husband if I could get a dog for comfort. No he said. I started praying for comfort.





Your Pet Loss Diaries: 'Theresa, Zeus & Shimma', Another Year Without You Oct 2013, Oct 27, 2013

Dear Shim Shim, I hate this month, it's so miserable and even more so cos you passed on Halloween eve. I remember it all so vividly, your loss and Zeusy's





My Pet Tributes: 'Khan-li'

Khan-li my darling boy you were and always will be the Apple of my Eye, The Love Of My Life My Heart and all that is wonderful. The memories I have of





Your Pet Tributes: 'Tank'

2 days ago I lost my best friend. Nine years ago you saved my life and stole my heart. You brought me happiness at my lowest point and gave me reason to





Your Pet Loss Diaries: 'Lisa & Diana', My Beautiful Diana, Nov 17, 2013

Hi my baby girl, How are you? Are you playing and having a good time? Are you staying close to Rufus? I hope you're happy and have all kinds of new friends





strip for April 22, 2020 - Like and Subscribe





ComicLab Podcast with Gale Galligan

EPISODE SUMMARY Today's show is brought to you by Wacom — makers of the incredible Wacom One! This week, the ComicLab guys talk shop with Gale Galligan, creator of the bestselling Babysitter's Club graphic novels. See all of Gale's latest at Galesaur.com.





strip for April 24, 2020 - Yes and...





HANA – JONI MITCHELL

Hana steps out of a storm Into a stranger’s warm, but Hard-up kitchen. She sees what must be done So she takes off her coat Rolls up her sleeves And starts pitchin’ in. Hana has a special knack For getting people back on the right track ‘Cause she knows They all matter So she doesn’t […]





woodshedding in libraryland II

Been thinking about this blog and how a lot of the work I’ve been doing lately doesn’t always lend itself...





Ask A Librarian: VPNs?

From a Vermont librarian: VPNs are really important and I'd like to remind our patrons about them, but it...





Ask A Librarian: Hard Drive Cleanup for Macs?

I am looking for someone who can help me find and clear out excess data on one of my...





Ask A Librarian: What About Controlled Digital Lending?

From a friend: Please explain to me your enthusiasm for controlled digital lending. Please let me know what you think...





Ask a Librarian: Older person wanting to learn about tech

Subtitled: What’s the Yahoo! Internet Life for this generation? From a friend: A nice older lady asked for advice on...





Ask A Librarian: Graphic Novels for Boomers?

I was wondering if you might give my little women's group (boomers) some guidance as to a beginning graphic novel for...





2019 reading list and commentary

I started 132 books this year and finished 127. New this year: a twitter thread of everything I read in...





Ask A Librarian: What is the deal with “free” ebook sites?

It’s been an odd set of months. I got busy with Drop-In Time and then very un-busy. I’ve been keeping...





Stop-And-Think

I hold these truths to be self-evident: Any law whatsoever, no matter how desirable on balance, will impose some costs on someone somewhere. In any society with more than about 12 people, it is virtually certain that those costs will be borne unequally. If the costs are borne unequally, then the costs borne by various […]





Animation

You should watch this: (animation embedded in the original post; comments on the site)





Goofus, Gallant and the Law

I. Why do some people sign up to have their brains frozen for possible future resurrection, while others don’t? You might think it’s because the first group has more faith in future technology, but Scott Alexander has survey data to suggest otherwise. Active members of the forum lesswrong.com, many of whom had pre-paid for brain […]





PHP 7.2.30 Release Announcement - PHP: Hypertext Preprocessor

The PHP development team announces the immediate availability of PHP 7.2.30. This is a security release. All PHP 7.2 users are encouraged to upgrade to this version. For source downloads of PHP 7.2.30 please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.





Platform.sh + Lando: local dev in perfect sync with the cloud - platform.sh

Platform.sh removes a major pain point for developers: having to invest time in managing servers, virtual machines, or containers. Instead, Platform.sh enables developers to focus 100% of their time on their code. Since the beginning, Platform.sh has provided instant cloning capability, so dev teams can work on perfect copies of their production sites in the cloud for every Git branch. Now, in partnership with Lando, we’re extending that capability to the desktop.





Switching phubb's HTTP client - Christian Weiske

phubb is a WebSub hub that notifies subscribers in realtime when your website is updated.

Up to this year, phubb sent HTTP requests (GET + POST) with file_get_contents() and an HTTP stream context - see my previous example.
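
For context, the file_get_contents() plus stream context approach looks roughly like this (a sketch, not phubb's actual code; the hub URL and payload are made up for illustration):

```php
<?php
// Sketch of sending a POST request with file_get_contents() and a
// stream context, the approach phubb originally used. The URL and
// payload here are illustrative, not phubb's real ones.
$context = stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => http_build_query(['hub.mode' => 'publish']),
        'timeout' => 3,
    ],
]);

// Returns the response body, or false on failure; response headers
// end up in the magic $http_response_header variable.
$body = @file_get_contents('https://example.org/hub', false, $context);
if ($body === false) {
    echo "request failed\n";
}
```

The same context array drives GET requests too; only 'method' changes.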

But then I needed a 100% correct way of detecting a page's Hub URL, and copied the code from phinde, my blog search engine. With that I introduced a dependency on PEAR's good old HTTP_Request2 library, and I decided to use that library for all requests.

Unfortunately, now the problems began: during development I got an error in about one in 10-20 requests on my machine and could not find the cause:

PHP Fatal error:  Uncaught HTTP_Request2_MessageException: Malformed response:  in HTTP/Request2/Adapter/Socket.php on line 1019

#0 HTTP/Request2/Adapter/Socket.php(1019): HTTP_Request2_Response->__construct('', true, Object(Net_URL2))
#1 HTTP/Request2/Adapter/Socket.php(136): HTTP_Request2_Adapter_Socket->readResponse()
#2 HTTP/Request2.php(946): HTTP_Request2_Adapter_Socket->sendRequest(Object(phubbHttpRequest))
#3 phubb/src/phubb/HttpRequest.php(22): HTTP_Request2->send()
#4 phubb/src/phubb/Task/Publish.php(283): phubbHttpRequest->send()
#5 phubb/src/phubb/Task/Publish.php(248): phubbTask_Publish->fetchTopic(Object(phubbModel_Topic))
#6 phubb/src/phubb/Task/Publish.php(77): phubbTask_Publish->checkTopicUpdate('http://push-tes...')
#7  in HTTP/Request2/Response.php on line 215

The socket adapter has this problem, and I did not want to try to debug that strange problem. (No idea if the cURL one has it; I do not want to rely on php-curl). Finding a new HTTP library was the only option.

New HTTP library

The PHP Framework Interop Group has several HTTP-related proposals; one of them is PSR-18: HTTP Client. Now that we have a standardized way to send HTTP requests in 2020, I should use a library that implements it.

The psr-18 topic on Github listed some clients:

Symfony's HTTP client was among them, and it provides a mock client for unit tests! Unfortunately, it also introduces a million dependencies.

There were two others that looked ok-ish at first sight (diciotto and http-client-curl), but both of them had no mock client, and the latter was even cURL-only. Again, nothing for me.

Then I found PHP-HTTP, which promises a standard interface for HTTP clients in PHP, and it supports PSR-18! It even has a socket client that has nearly no dependencies, and a mock client for unit tests. I'll try that one for now.
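
The appeal of PSR-18 is that calling code depends only on an interface, so a mock client can stand in for the socket client in tests. A simplified sketch of the pattern (the real Psr\Http\Client\ClientInterface::sendRequest() takes a PSR-7 RequestInterface, not a URL string; MockClient and fetchTopic are made-up names to keep the example dependency-free):

```php
<?php
// Simplified sketch of the PSR-18 idea: depend on an interface, inject
// a mock in tests. The real interface works on PSR-7 request/response
// objects; plain strings are used here to avoid external dependencies.
interface ClientInterface
{
    public function sendRequest(string $url): string;
}

class MockClient implements ClientInterface
{
    public array $requests = [];

    public function __construct(private string $cannedBody)
    {
    }

    public function sendRequest(string $url): string
    {
        $this->requests[] = $url;    // record the call for assertions
        return $this->cannedBody;    // never touches the network
    }
}

// Code under test accepts any ClientInterface, so tests can inject the mock
// while production wires in a real (e.g. socket-based) client:
function fetchTopic(ClientInterface $client, string $topic): string
{
    return $client->sendRequest($topic);
}

$mock = new MockClient('<link rel="hub" href="https://hub.example/">');
echo fetchTopic($mock, 'https://example.org/feed'), "\n";
```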





PHP Internals News: Episode 50: The RFC Process - Derick Rethans


In this episode of "PHP Internals News", Henrik Gemal (LinkedIn, Website) asks me about how PHP's RFC process works, and I try to answer all of his questions.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:16

Hi, I'm Derick. And this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 50. Today I'm talking with Henrik Gemal after he reached out with a question. You might know that at the end of every podcast, I ask: if you have any questions, feel free to email me. And Henrik was the first person to actually do so within a year and a half's time. For the fun of it, I'm thinking that instead of me asking the questions, I'm letting Henrik ask the questions today, because he suggested that we should do a podcast about how the RFC process actually works. Henrik, would you please introduce yourself?

Henrik Gemal 0:52

Yeah, my name is Henrik Gemal. I live in Denmark. I'm the CTO of DinnerBooking, which does reservation systems for restaurants. I've been doing PHP development for more than 10 years. But I'm not coding so much now. Now I'm managing a big team of PHP developers. And I've also been involved in the open source development of Mozilla Firefox.

Derick Rethans 1:19

So usually I prepare the questions, but in this case, Henrik has prepared the questions. So I'll hand over to him to get started with them. And I'll try to do my best to answer the questions.

Henrik Gemal 1:27

I heard a lot about these RFCs. And I was interested in the process of it. So I'm just starting right off here, who can actually do an RFC? Is it anybody on the internet?

Derick Rethans 1:38

Yeah, pretty much. In order to be able to do an RFC, what you need is an idea, and then access to our wiki system to be able to publish it. The RFC process is open for everybody. In the last year and a half or so, some of the podcasts that I've done have been with people that have been contributing to PHP for a long time. But in other cases, it's people like yourself that have an idea, come up, work together with somebody on a patch, and then create an RFC out of that. And that then goes through the whole process. And sometimes they get accepted, and sometimes they don't.

Henrik Gemal 2:16

How technical are the RFCs? Is it like coding? Or is it more like the idea in general?

Derick Rethans 2:23

The idea needs to be there, it needs to be thought out. It needs to have a good reason for why we want to add or change something in PHP. The motivation is almost as important as what the change or addition actually is about. That doesn't always get adhered to very well, in my opinion, but it is an important thing. Then, with the idea, we need to talk about what impact it has on the rest of the ecosystem, whether there are backward compatible breaks in there, how it affects extensions, or sometimes how it affects OPCache. Sometimes considerations have to be taken for that, because it's something quite important in the PHP ecosystem. And it is recommended that an RFC comes with a patch, because it's often a lot easier to talk about an implementation than to talk about the idea. But that is not a necessity. There have been quite some RFCs where the idea was there, but there wasn't a patch right away. It is less likely that these RFCs will g

Truncated by Planet PHP, read more at the original (another 15224 bytes)





PHP Internals News: Episode 51: Object Ergonomics - Derick Rethans


In this episode of "PHP Internals News" I talk with Larry Garfield (Twitter, Website, GitHub) about a blog post that he has written related to PHP's Object Ergonomics.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:16

Hi, I'm Derick. And this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 51. Today I'm talking with Larry Garfield, not about an RFC for once, but about a blog post that he's written called Object Ergonomics. Larry, would you please introduce yourself?

Larry Garfield 0:38

Hello World. My name is Larry Garfield, also Crell, CRELL, on various social media. I work at platform.sh in developer relations. We're a continuous deployment cloud hosting company. I've been writing PHP for 21 years and have been an active gadfly and nudge for at least 15 of those.

Derick Rethans 1:01

In the last couple of months, we have seen quite a lot of smaller RFCs about all kinds of little features here and there, to do with making the object oriented model of PHP a little bit better. I reckon this is also the nudge behind you writing a slightly longer blog post titled "Improving PHP object ergonomics".

Larry Garfield 1:26

If by slightly longer you mean 14 pages? Yes.

Derick Rethans 1:29

Yes, exactly. Yeah, it took me a while to read through. What made you write this document?

Larry Garfield 1:34

As you said, there's been a lot of discussion around improving PHP's general user experience of working with objects in PHP. Where there's definitely room for improvement, no question. And I found a lot of these to be useful in their own right, but also very narrow and narrow in ways that solve the immediate problem but could get in the way of solving larger problems later on down the line. So I went into this with an attitude of: Okay, we can kind of piecemeal and attack certain parts of the problem space. Or we can take a step back and look at the big picture and say: Alright, here's all the pain points we have. What can we do that would solve not just this one pain point. But let us solve multiple pain points with a single change? Or these two changes together solve this other pain point as well. Or, you know, how can we do this in a way that is not going to interfere with later development that we've talked about. We know we want to do, but isn't been done yet. So how do we not paint ourselves into a corner by thinking too narrow?

Derick Rethans 2:41

It's a curious thing, because a more narrow RFC is likely easier to get accepted, because it doesn't pull in a whole set of other problems as well. But of course, as you say, if the whole idea hasn't been thought through, then some of these things might not actually end up being beneficial. Because it can be combined with some other things to directly address the problems that we're trying to solve, right?

Larry Garfield 3:07

Yeah, it comes down to what are the smallest changes we can make that taken together have the largest impact. That kind of broad picture thinking is something that is hard to do in PHP, just given the way it's structured. So I took a stab at that.

Derick Rethans 3:21

What are the main problems that we should address?

Larry Garf

Truncated by Planet PHP, read more at the original (another 29525 bytes)





Xdebug Update: April 2020 - Derick Rethans


Another monthly update where I explain what happened with Xdebug development in this past month. These will be published on the first Tuesday after the 5th of each month. Patreon supporters will get it earlier, on the first of each month. You can become a patron to support my work on Xdebug. If you are leading a team or company, then it is also possible to support Xdebug through a subscription.

In March, I worked on Xdebug for about 60 hours, on the following things:

Xdebug 2.9.5

The 2.9.5 release addresses a few bugs. One of them was a follow on from the issue where Xdebug would crash when another extension would run code in PHP's Request Init stage, but only on a second or later request in the same PHP process. As this is not something that's easy to catch with PHP's testing framework that Xdebug uses, this issue slipped through the cracks.

The release fixes another bug, where throwing an exception from within a destructor would crash Xdebug. The fix for this was merely making sure that PHP's internal state is still available:

- if (!(ZEND_CALL_INFO(EG(current_execute_data)) & ZEND_CALL_HAS_SYMBOL_TABLE)) {
+ if (EG(current_execute_data) && !(ZEND_CALL_INFO(EG(current_execute_data)) & ZEND_CALL_HAS_SYMBOL_TABLE)) {

Beyond these two crashes, the release also addressed an issue where Xdebug did not always correctly detect where executable code could exist for code coverage analysis. Over the last decade, PHP has been getting more and more optimised, with more internal engine instructions. Unfortunately, that sometimes means that these are not hooked into by Xdebug, to see whether there could be a line of code that would make use of these opcodes. As this is often very dependent on how developers lay out their code, these issues are often found by them. Luckily, these issues are trivially fixed, as long as I have access to the file containing that code. I then analyse it with vld to see which opcode (PHP engine instruction) I have missed.

Xdebug 3 and Xdebug Cloud

Most of my time was spent on getting Xdebug Cloud to a state where I can invite select developers to alpha test it. This includes allowing Xdebug to connect to Xdebug Cloud. There is currently a branch available, but it still lacks SSL encryption, which is a requirement for the safe transport of debug information.

The communications between an IDE and Xdebug through Xdebug Cloud is working, with a few things related to detecting disconnections more reliably still outstanding.

As Xdebug Cloud needs integration in debugging clients (such as PhpStorm, and other IDEs), I have been extending the dbgpProxy tool to act as an intermediate link between existing IDEs and Xdebug Cloud, without IDEs having to change anything. This work is still ongoing and is not documented yet, but I hope to finish it in the next week. Once that and SSL support in the Xdebug-to-Xdebug Cloud communication have been finalized, I will reach out to subscribers of the Xdebug Cloud newsletter to see if anybody is interested in trying it out.

Podcast

The PHP Internals News continues its second season. Episodes in the last month included a discussion on PHP 8's JIT engine and increasing complexity,

Truncated by Planet PHP, read more at the original (another 720 bytes)






PHP Internals News: Episode 52: Floats and Locales - Derick Rethans


In this episode of "PHP Internals News" I talk with George Banyard (Website, Twitter, GitHub, GitLab) about an RFC that he has proposed together with Máté Kocsis (Twitter, GitHub, LinkedIn) to make PHP's float to string logic no longer use locales.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:16

Hi, I'm Derick. And this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 52. Today I'm talking with George Banyard about an RFC that he's made together with Mate Kocsis. This RFC is titled locale independent floats to string. Hello, George, would you please introduce yourself?

George Banyard 0:39

Hello, I'm George Peter Banyard. I'm a student at Imperial College and I work on PHP in my free time.

Derick Rethans 0:47

All right, so we're talking about local independent floats. What is the problem here?

George Banyard 0:52

Currently when you do a float to string conversion, so any casting or displaying of a float, the conversion will depend on the current locale. So instead of always using the decimal dot separator, if you have, for example, the German or the French locale enabled, it will use a comma to separate the decimals.

Derick Rethans 1:14

Okay, I can understand that that could be a bit confusing. What are these locales exactly?

George Banyard 1:20

So locales, which are more or less C locales, which PHP exposes to userland, are a way to change a bunch of rules on how strings and similar things get displayed at the C level. One of the issues with it is that it's global. For example, if you use the thread safe PHP version, then setlocale() is not thread safe, so it will just impact other threads where you're using it.

Derick Rethans 1:50

So a locale is a set of rules to format specific things, with floating point numbers being one of them. In which situations does the locale influence the display of floating point numbers: in every situation in PHP, or only in some?

George Banyard 2:06

Yes, it only impacts certain aspects, which is quite surprising. A string cast, the strval() function, var_dump(), and debug_zval_dump() will all be affected by the decimal separator, and also printf() with the lowercase f modifier, but that's expected because it's locale aware, compared to the capital F modifier.

Derick Rethans 2:32

But it doesn't, for example, have the same problem in the serialize() function or, say, var_export().

George Banyard 2:37

Yeah, and json_encode() also doesn't do that. PDO has special code which also handles this, so that all the PDO drivers get a consistent float-to-string treatment, because that could impact the databases.
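
The behaviour discussed above can be sketched in a few lines (a hedged example: it assumes a pre-8.0 PHP, since the RFC's locale-independent casts landed in PHP 8.0, and that a German locale is installed; without one, the locale-dependent branch is simply skipped):

```php
<?php
// Demonstrates the locale-dependent float-to-string conversion under
// discussion. Assumes PHP < 8.0 (the RFC made casts locale-independent
// in 8.0) and that a German locale is available on the system.
$value = 3.14;

setlocale(LC_ALL, 'C');
echo (string) $value, "\n";          // 3.14

if (setlocale(LC_ALL, 'de_DE.UTF-8', 'de_DE') !== false) {
    echo (string) $value, "\n";      // "3,14" on PHP < 8.0: the cast is locale-aware
    printf("%F\n", $value);          // 3.140000: capital F stays locale-independent
}

// These were always locale-independent, as mentioned in the episode:
echo var_export($value, true), "\n"; // 3.14
echo json_encode($value), "\n";      // 3.14
```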

Derick Rethans 2:53

How is it a problem when some locale is enabled that uses a comma instead of the decimal point? How can this cause bugs in PHP applications?

Truncated by Planet PHP, read more at the original (another 17468 bytes)





Hedge Fund 'Asshole' Destroying Local News & Firing Reporters Wants Google & Facebook To Just Hand Him More Money

Have you heard of Heath Freeman? He's a thirty-something hedge fund boss, who runs "Alden Global Capital," which owns a company misleadingly called "Digital First Media." His business has been to buy up local newspapers around the country and basically cut everything down to the bone, and just milk the assets for whatever cash they still produce, minus all the important journalism stuff. He's been called "the hedge fund asshole", "the hedge fund vampire that bleeds newspapers dry", "a small worthless footnote", the "Gordon Gekko" of newspapers and a variety of other fun things.

Reading through some of those links above, you find a standard playbook for Freeman's managing of newspapers:

These are the assholes who a few years ago bought the Denver Post, once one of the best regional newspapers in the country, and hollowed it out into a shell of its former self, then laid off some more people. Things got so bad that the Post’s own editorial board rebelled, demanding that if “Alden isn’t willing to do good journalism here, it should sell the Post to owners who will.”

And here's one of the other links from above telling a similar story:

The Denver newsroom was hardly alone in its misery. In Northern California, a combined editorial staff of 16 regional newspapers had reportedly been slashed from 1,000 to a mere 150. Farther down the coast in Orange County, staffers, according to industry analyst Ken Doctor, complained of rats, mildew, fallen ceilings, and filthy bathrooms. In her Washington Post column, media critic Margaret Sullivan called Alden “one of the most ruthless of the corporate strip-miners seemingly intent on destroying local journalism.”

And, yes, I think it's fair to say that many newspapers did get a bit fat and happy with their old school monopolistic hold on the news market pre-internet. And many of them failed to adapt. And so, restructuring and re-prioritizing is not a bad idea. But that's not really what's happening here. Alden appears to be taking profitable (not just struggling) newspapers, and squeezing as much money out of them directly into Freeman's pockets, rather than plowing it back into actual journalism. And Alden/DFM appears to be ridiculously profitable for Freeman, even as the journalism it produces becomes weaker and weaker. Jim Brady called it "combover journalism." Basically using skeleton staff to pretend to really be covering the news, when it's clear to everyone that it's not really doing the job.

All of that is prelude to the latest news that Freeman, who basically refuses to ever talk to the media, has sent a letter to other newspaper bosses suggesting they collude to force Google and Facebook to make him even richer.

You can see the full letter here:


Let's go through this nonsense bit by bit, because it is almost 100% nonsense.

These are immensely challenging times for all of us in the newspaper industry as we balance the two equally important goals of keeping the communities we serve fully informed, while also striving to safeguard the viability of our news organizations today and well into the future.

Let's be clear: the "viability" of your newsrooms was decimated when you fired a huge percentage of the local reporters and stuffed the profits into your pockets, rather than investing in the actual product.

Since Facebook was founded in 2004, nearly 2,000 (one in five) newspapers have closed and with them many thousands of newspaper jobs have been lost. In that same time period, Google has become the world's primary news aggregation service, Apple launched a news app with a subscription-based tier and Twitter has become a household name by serving as a distribution service for the content our staffs create.

Correlation is not causation, of course. But even if that were the case, the focus of a well-managed business would be to adapt to the changing market place to take advantage of, say, new distribution channels, new advertising and subscription products, and new ways of building a loyal community around your product. You know, the things that Google, Facebook and Twitter did... which your newspaper didn't do, perhaps because you fired a huge percentage of their staff and re-directed the money flow away from product and into your pocket.

Recent developments internationally, which will finally require online platforms to compensate the news industry are encouraging. I hope we can collaborate to move this issue forward in the United States in a fair and productive way. Just this month, April 2020, French antitrust regulators ordered Google to pay news publishers for displaying snippets of articles after years of helping itself to excerpts for its news service. As regulators in France said, "Google's practices caused a serious and immediate harm to the press sector, while the economic situation of publishers and news agencies is otherwise fragile." The Australian government also recently said that Facebook and Google would have to pay media outlets in the country for news content. The country's Treasurer, Josh Frydenberg noted "We can't deny the importance of creating a level playing field, ensuring a fair go for companies and the appropriate compensation for content."

We have, of course, written about both the plans in France as well as those in Australia (not to mention a similar push in Canada that Freeman apparently missed). Of course, what he's missing is... well, nearly everything. First, the idea that it's Google that's causing problems for the news industry is laughable on multiple fronts.

If newspapers feel that Google is causing them harm by linking to them and sending them traffic, then they can easily block Google, which respects robots.txt restrictions. I don't see Freeman's newspaper doing that. Second, in most of the world, Google does not monetize its Google News aggregation service, so the idea that it's someone making money off of "their" news, is not supported by reality. Third, the idea that "the news" is "owned" by the news organizations is not just laughable, but silly. After all, the news orgs are not making the news. If Freeman is going to claim that news orgs should be compensated for "their" news, then, uh, shouldn't his news orgs be paying the actual people who make the news that they're reporting on? Or is he saying that journalism is somehow special?
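
The robots.txt opt-out mentioned above really is that simple; a generic example (Googlebot honours these directives and would stop crawling, and thus excerpting, the site entirely):

```
# robots.txt at the site root -- blocks Google's crawler completely
User-agent: Googlebot
Disallow: /
```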

Finally, and most importantly, he says all of this as if we haven't seen how these efforts play out in practice. When Germany passed a similar law, Google ended up removing snippets only to be told they had to pay anyway. Google, correctly, said that if it had to license snippets, it would offer a price of $0, or it would stop linking to the sites -- and the news orgs agreed. In Spain, where Google was told it couldn't do this, the company shut down Google News and tons of smaller publications were harmed, not helped, by this policy.

This surely sounds familiar to all of us. It's been more than a decade since Rupert Murdoch instinctively observed: "There are those who think they have a right to take our news content and use it for their own purposes without contributing a penny to its production... Their almost wholesale misappropriation of our stories is not fair use. To be impolite, it's theft."

First off, it's not theft. As we pointed out at the time, Rupert Murdoch, himself, at the very time he was making these claims, owned a whole bunch of news aggregators himself. The problem was never news aggregators. The problem has always been that other companies are successful on the internet and Rupert Murdoch was not. And, again, the whole "misappropriation" thing is nonsense: any news site is free to block Google's scrapers and if it's "misappropriation" to send you traffic, why do all of these news organizations employ "search engine optimizers" who work to get their sites higher in the rankings? And, yet again, are they paying the people who make the actual news? If not, then it seems like they're full of shit.

With Facebook and Google recently showing some contrition by launching token programs that provide a modest amount of funding, it's heartening to see that the tech giants are beginning to understand their moral and social responsibility to support and safeguard local journalism.

Spare me the "moral and social responsibility to support and safeguard local journalism," Heath. You're the one who cut 1,000 journalism jobs down to 150. Not Google. You're the one who took profitable newspapers that were investing in local journalism, fired a huge number of their reporters and staff, and redirected the even larger profits into your pockets instead of local journalism.

Even if someone wants to argue this fallacy, it should not be you, Heath.

Facebook created the Facebook Journalism Project in 2017 "to forge stronger ties with the news industry and work with journalists and publishers." If Facebook and the other tech behemoths are serious about wanting to "forge stronger ties with the news industry," that will start with properly remunerating the original producers of content.

Remunerating the "original producers"? So that means that Heath is now agreeing to compensate the people who create the news that his remaining reporters write up? Oh, no? He just means himself -- the middleman -- being remunerated directly into his pocket while he continues to cut jobs from his newsroom while raking in record profits? That seems... less compelling.

Facebook, Google, Twitter, Apple News and other online aggregators make billions of dollars annually from original, compelling content that our reporters, photographers and editors create day after day, hour after hour. We all know the numbers, and this one underscores the value of our intellectual property: The New York Times reported that in 2018, Google alone conservatively made $4.7 billion from the work of news publishers. Clearly, content-usage fees are an appropriate and reasonable way to help ensure newspapers exist to provide communities across the country with robust high-quality local journalism.

First of all, the $4.7 billion is likely nonsense, but even if it were accurate, Google is making that money by sending all those news sites a shit ton of traffic. Why aren't they doing anything reasonable to monetize it? And, of course, Digital First Media has bragged about its profitability, and leaked documents suggest its news business brought in close to a billion dollars in 2017 with a 17% operating margin, significantly higher than all other large newspaper chains.

This is nothing more than "Google has money, we want more money, Google needs to give us the money." There is no "clearly" here and "usage fees" are nonsense. If you don't want Google's traffic, put up robots.txt. Google will survive, but your papers might not.

One model to consider is how broadcast television stations, which provide valuable local news, successfully secured sizable retransmission fees for their programming from cable companies, satellite providers and telcos.

There are certain problems with retransmission fees in the first place (given that broadcast television was, by law, freely transmitted over the air in exchange for control over large swaths of spectrum), and the value they got was in having a large audience to advertise to. But, more importantly, retransmission involved taking an entire broadcast channel and piping it through cable and satellite to make things easier for TV watchers who didn't want to switch between an antenna and a cable (or satellite receiver). An aggregator is not -- contrary to what one might think reading Freeman's nonsense -- retransmitting anything. It's linking to your content and sending you traffic on your own site. The only things it shows are a headline and (sometimes) a snippet to attract more traffic.

There are certainly other potential options worth of our consideration -- among them whether to ask Congress about revisiting thoughtful limitations on "Fair Use" of copyrighted material, or seeking judicial review of how our trusted content is misused by others for their profit. By beginning a collective dialogue on these topics we can bring clarity around the best ways to proceed as an industry.

Ah, yes, let's throw fair use -- the very thing that news orgs regularly rely on to not get sued into the ground -- out the window in an effort to get Google to funnel extra money into Heath Freeman's pockets. That sounds smart. Or the other thing. Not smart.

And "a collective dialogue" in this sense appears to be collusion. As in an antitrust violation. Someone should have maybe mentioned that to Freeman.

Our newspaper brands and operations are the engines that power trusted local news in communities across the United States.

Note that it's the brands and operations -- not journalists -- that he mentions here. That's a tell.

Fees from those who use and profit from our content can help continually optimize our product as well as ensure our newsrooms have the resources they need.

Again, Digital First Media is perhaps the most profitable newspaper chain around. And it just keeps laying off reporters.

My hope is that we are able to work together towards the shared goal of protecting and enhancing local journalism.

You first, Heath, you first.

So, basically, Heath Freeman, who has spent a decade or so buying up profitable newspapers, laying off a huge percentage of their newsrooms, leaving a husk in their place, then redirecting the continued profits (which often exist solely because of the legacy brand) into his own pockets rather than into journalism... wants the other newspapers to collude with him to force successful internet companies who send their newspapers a ton of free traffic to pay him money for the privilege of sending them traffic.

Sounds credible.





Suspected DNC & German Parliament Hacker Used His Name As His Email Password

You may have seen the news reports this week that German prosecutors have issued an arrest warrant for Dmitry Badin for a massive hack of the German Parliament that made headlines in 2016. The reports about the German arrest warrant all mention that German authorities "believe" that Badin is connected to the Russian GRU and its APT28 hacking group.

The folks over at Bellingcat have done their open source intelligence investigation thing, and provided a ton of evidence to show that Badin almost certainly is part of GRU... including the fact that he registered his 2018 car purchase to the public address of a GRU building. This is not the first time this has happened. A few years back, Bellingcat also connected a bunch of people to the GRU -- including some accused of hacking by the Dutch government -- based on leaked car registration info.

There's much, much more in the Bellingcat report, but the final paragraph really stands out. Bellingcat also found Badin -- again, a hacker who is suspected in multiple massive and consequential hacks, including of email accounts -- didn't seem to be all that careful with his own security:

The most surreal absence of “practice-what-you-breach” among GRU hackers might be visible in their lackadaisical attitude to their own cyber protection. In 2018, a large collection of hacked Russian mail accounts, including user name and passwords, was dumped online. Dmitry Badin’s email — which we figured out from his Skype account, which we in turn obtained from his phone number, which we of course got from his car registration — had been hacked. He had apparently been using the password Badin1990. After this, his email credentials were leaked again as part of a larger hack, where we see that he had changed his password from Badin1990 to the much more secure Badin990.

Yes, the password for at least one of his email accounts... was apparently his own last name plus the year he was born. The cobbler's kids go shoeless again.
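To see just how weak that is, here's a toy sketch of a targeted guess list. Passwords built from public personal details fall on the first pass, long before any brute force is needed (the surname and birth year come from the Bellingcat report; the attack itself is purely illustrative):

```python
from itertools import product

# Toy targeted-guessing sketch: combine known personal details the way
# people commonly build passwords -- surname (in a few capitalizations)
# plus a plausible birth year.
surnames = ["Badin", "badin", "BADIN"]
years = [str(y) for y in range(1980, 2000)]

candidates = {name + year for name, year in product(surnames, years)}

# "Badin1990" falls within the first 60 guesses.
print("Badin1990" in candidates, len(candidates))
```

A random nine-character password would take billions of guesses; a name-plus-year password takes dozens.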





Harrisburg University Researchers Claim Their 'Unbiased' Facial Recognition Software Can Identify Potential Criminals

Given all we know about facial recognition tech, it is literally jaw-dropping that anyone could make this claim… especially without the work being independently vetted.

A group of Harrisburg University professors and a PhD student have developed an automated computer facial recognition software capable of predicting whether someone is likely to be a criminal.

The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.

There's a whole lot of "what even the fuck" in CBS 21's reprint of a press release, but let's start with the claim about "no racial bias." That's a lot to swallow when the underlying research hasn't been released yet. Let's see what the National Institute of Standards and Technology has to say on the subject. This is the result of the NIST's examination of 189 facial recognition AI programs -- all far more established than whatever it is Harrisburg researchers have cooked up.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Why is this acceptable? The report inadvertently supplies the answer:

Middle-aged white men generally benefited from the highest accuracy rates.

Yep. And guess who's making laws or running police departments or marketing AI to cops or telling people on Twitter not to break the law or etc. etc. etc.

To craft a terrible pun, the researchers' claim of "no racial bias" is absurd on its face. Per se stupid af to use legal terminology.

Moving on from that, there's the 80% accuracy, which is apparently good enough since it will only threaten the life and liberty of 20% of the people it's inflicted on. I guess if it's the FBI's gold standard, it's good enough for everyone.
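To put a number on that: even taking the 80% figure at face value (reading it as both the true-positive and true-negative rate), a quick Bayes calculation shows what happens when the trait being "predicted" is rare. The 1% base rate below is my assumption for illustration, not anything from the study:

```python
# Back-of-the-envelope Bayes: what "80% accurate" means when almost
# nobody in the scanned population actually belongs to the target class.
base_rate = 0.01       # assumed fraction of "criminals" (illustrative)
sensitivity = 0.80     # P(flagged | criminal)
specificity = 0.80     # P(not flagged | not criminal)

# Total fraction of the population that gets flagged
p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)

# Fraction of flagged people who are actually in the target class
p_criminal_given_flagged = sensitivity * base_rate / p_flagged

print(f"{p_criminal_given_flagged:.1%} of flags are correct")
```

Under these assumptions only about 4% of the people the system flags would actually be "criminals" -- more than 96% of flags would land on innocent people.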

Maybe this is just bad reporting. Maybe something got copy-pasted wrong from the spammed press release. Let's go to the source… one that somehow still doesn't include a link to any underlying research documents.

What does any of this mean? Are we ready to embrace a bit of pre-crime eugenics? Or is this just the most hamfisted phrasing Harrisburg researchers could come up with?

A group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal.

The most charitable interpretation of this statement is that the wrong-20%-of-the-time AI is going to be applied to the super-sketchy "predictive policing" field. Predictive policing -- a theory that says it's ok to treat people like criminals if they live and work in an area where criminals live -- is its own biased mess, relying on garbage data generated by biased policing to turn racist policing into an AI-blessed "work smarter not harder" LEO equivalent.

The question about "likely" is answered in the next paragraph, somewhat assuring readers the AI won't be applied to ultrasound images.

With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.

There's a big difference between "going to be" and "is," and researchers using actual science should know better than to use both phrases to describe their AI efforts. One means scanning someone's face to determine whether they might eventually engage in criminal acts. The other means matching faces to images of known criminals. They are far from interchangeable terms.

If you think the above quotes are, at best, disjointed, brace yourself for this jargon-fest which clarifies nothing and suggests the AI itself wrote the pullquote:

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

"Minute features in an image that are highly predictive of criminality." And what, pray tell, are those "minute features?" Skin tone? "I AM A CRIMINAL IN THE MAKING" forehead tattoos? Bullshit on top of bullshit? Come on. This is word salad, but a salad pretending to be a law enforcement tool with actual utility. Nothing about this suggests Harrisburg has come up with anything better than the shitty "tools" already being inflicted on us by law enforcement's early adopters.

I wish we could dig deeper into this but we'll all have to wait until this excitable group of clueless researchers decides to publish their findings. According to this site, the research is being sealed inside a "research book," which means it will take a lot of money to actually prove this isn't any better than anything that's been offered before. This could be the next Clearview, but we won't know if it is until the research is published. If we're lucky, it will be before Harrisburg patents this awful product and starts selling it to all and sundry. Don't hold your breath.





Senator Wyden And Others Introduce Bill Calling The DOJ's Bluff Regarding Its Attempt To Destroy Section 230 & Encryption

One of the key points we've been making concerning Attorney General William Barr and his DOJ's eager support for the terrible EARN IT Act, is that much of it really seems designed to cover up the DOJ's own failings in fighting child porn and child exploitation. The premise behind the EARN IT Act is that there's a lot of child exploitation/child abuse material found on social media... and that social media companies should do more to block that content. Of course, if you step back and think about it, you'd quickly realize that this is a form of sweeping the problem under the rug. Rather than actually tracking down and arresting those exploiting and abusing children, it's demanding private companies just hide the evidence of those horrific acts.

And why might the DOJ and others be so supportive of sweeping evidence under the rug and hiding it? Perhaps because the DOJ and Congress have literally failed to live up to their mandates under existing laws to actually fight child exploitation. Barr's DOJ has been required under law to produce reports showing data about internet crimes against children, and come up with goals to fight those crimes. It has produced only two out of the six reports that were mandated over a decade ago. At the same time, Congress has only allocated a very small budget to state and local law enforcement for fighting internet child abuse. While the laws Congress passed say that Congress should give $60 million to local law enforcement, it has actually allocated only about half of that. Oh, and Homeland Security took nearly half of its "cybercrimes" budget and diverted it to immigration enforcement, rather than fighting internet crimes such as child exploitation.

So... maybe we should recognize that the problem isn't social media platforms, but the fact that Congress and law enforcement -- from local and state up to the DOJ -- have literally failed to do their job.

At least some elected officials have decided to call the DOJ's bluff on why we need the EARN IT Act. Led by Senator Ron Wyden (of course), Senators Kirsten Gillibrand, Bob Casey, Sherrod Brown and Rep. Anna Eshoo have introduced a new bill to actually fight child sex abuse online. Called the Invest in Child Safety Act, it would basically make law enforcement do its job regarding this stuff.

The Invest in Child Safety Act would direct $5 billion in mandatory funding to investigate and target the pedophiles and abusers who create and share child sexual abuse material online. And it would create a new White House office to coordinate efforts across federal agencies, after DOJ refused to comply with a 2008 law requiring coordination and reporting of those efforts. It also directs substantial new funding for community-based efforts to prevent children from becoming victims in the first place.

Basically, the bill would do a bunch of things to make sure that law enforcement is actually dealing with the very real problem of child exploitation, rather than demanding that internet companies (1) sweep evidence under the rug, and (2) break encryption:

  • Quadruple the number of prosecutors and agents in DOJ’s Child Exploitation and Obscenity Section from 30 FTEs to 120 FTEs;
  • Add 100 new agents and investigators for the Federal Bureau of Investigation’s Innocent Images National Initiative, Crimes Against Children Unit, Child Abduction Rapid Deployment Teams, and Child Exploitation and Human Trafficking Task Forces;
  • Fund 65 new NCMEC analysts, engineers, and mental health counselors, as well as a major upgrade to NCMEC’s technology platform to enable the organization to more effectively evaluate and process CSAM reports from tech companies;
  • Double funding for the state Internet Crimes Against Children (ICAC) Task Forces;
  • Double funding for the National Criminal Justice Training Center, to administer crucial Internet Crimes Against Children and Missing and Exploited Children training programs;
  • Increase funding for evidence-based programs, local governments and non-federal entities to detect, prevent and support victims of child sexual abuse, including school-based mental health services and prevention programs like the Children’s Advocacy Centers and the HHS’ Street Outreach Program;
  • Require tech companies to increase the time that they hold evidence of CSAM, in a secure database, to enable law enforcement agencies to prosecute older cases;
  • Establish an Office to Enforce and Protect Against Child Sexual Exploitation, within the Executive Office of the President, to direct and streamline the federal government’s efforts to prevent, investigate and prosecute the scourge of child exploitation;
  • Require the Office to develop an enforcement and protection strategy, in coordination with HHS and GAO; and
  • Require the Office to submit annual monitoring reports, subject to mandatory Congressional testimony to ensure timely execution.
While I always have concerns about law enforcement mission creep and misguided targeting of law enforcement efforts, hopefully everyone can agree that child exploitation remains a very real problem, and one that law enforcement should address by investigating and going after those who are actually exploiting and abusing children. This bill would make that possible, rather than the alternative approach of just blaming the internet companies for law enforcement's failure to take any of this seriously.





Fans Port Mario 64 To PC And Make It Way Better, So Of Course Nintendo Is Trying To Nuke The Project

I'm lucky enough to own a decades-old Nintendo 64 and a handful of games, including the classic Mario 64. My kids love that game. Still, the first thing they asked when I showed it to them the first time is why the screen was letterboxed, why the characters looked like they were made of Lego blocks, and why I needed weird cords to plug it all into the flat screen television. The answer to these spoiled monsters' questions, of course, is that the game is super old and wasn't meant to be played on modern televisions. It's the story of a lot of older games, though many PC games at least have a healthy modding community that will take classics and get them working on present-day hardware. Consoles don't have that luxury.

Well, usually, that is. It turns out that enough folks were interested in modernizing Mario 64 that a group of fans managed to pull off porting it to PC. And, because this is a port and not emulation, they managed to update it to run in 4k graphics and added a ton of modern visual effects.

Last year, Super Mario 64's N64 code was reverse-engineered by fans, allowing for all kinds of new and exciting things to be done with Nintendo’s 1996 classic. Like building a completely new PC port of the game, which can run in 4K and ultra-wide resolutions.

This is a very new and cool thing! Previously, if you were playing Super Mario 64 on PC, you were playing via emulation, as your PC ran code pretending to be an N64. This game is made specifically for the PC, built from the ground up, meaning it not only runs like a dream, but even supports mod stuff like ReShade, allowing for graphical tweaks (like the distance blur seen here).

The video the Kotaku post is referencing can't be embedded here because Nintendo already took it down. Instead, I'll use another video that hasn't been taken down at the time of this writing, so you can see just how great this looks.

In addition to videos of the project, Nintendo has also been busy firing off legal salvos to get download links for the PC port of the game taken down from wherever it can find them. Now, while Nintendo's reputation for IP protectionism is such that it would almost certainly take this fan project down under virtually any circumstances, it is also worth noting that the company has a planned re-release of Mario 64 for its latest Nintendo console. That likely only supercharged the speed with which it is trying to disappear this labor of love from fans of an antiquated game who have since moved on to gaming on their PCs.

But why should the company do this? Nintendo consoles are known for many things, including user-friendly gaming and colorful games geared generally towards younger audiences. You know, exactly not the people who would take it on themselves to get an old Mario game working on their PC instead of a Nintendo console. What threat does this PC port from fans represent to Nintendo revenue? It's hard to imagine that threat is anything substantial.

And, yet, here we are anyway. Nintendo, after all, doesn't seem to be able to help itself.





Amazon Sued For Saying You've 'Bought' Movies That It Can Take Away From You

For well over a decade we've talked about the many problems that arise when copyright is compared to "property" -- and people try to simply move over concepts from physical, tangible property into the world of digital. A key aspect of this: when you "purchase" something digital online, is it really a "purchase" or is it a "license" (especially a license that could be revoked)? If it was a true "purchase" then you should own it and the seller shouldn't be able to take it back. But in practice, over and over and over again, we've seen stories of people having things they supposedly "bought" disappear. The situation is so crazy that we've referred to it as Schrödinger's Download, in that many copyright holders and retailers would like the very same thing to be a "sale" some of the time, and a "license" some of the time (the "times" for each tend to be when it hurts the consumers the most). This has, at times, seeped into physical goods, where they've tried to add "license agreements" to physical products. Or, worse, when some copyright folks claimed that buying a DVD means you don't actually own what you bought, but rather are merely "purchasing access" to the content, and that could be revoked.

Anyway, I'm amazed that we don't see more lawsuits about this kind of thing -- but one was recently filed in California. Someone named Amanda Caudel is suing Amazon for saying that you've "purchased" a video download, which Amazon might disappear from your library whenever it wants. As the lawsuit makes clear, Amazon directly says that you are buying the movie (as opposed to renting it).

And, they point out, in your account there's a listing of "Your Video Purchases & Rentals." But, the lawsuit claims, what you purchase doesn't seem to behave like a real purchase:

Reasonable consumers will expect that the use of a “Buy” button and the representation that their Video Content is a “Purchase” means that the consumer has paid for full access to the Video Content and, like any bought product, that access cannot be revoked.

Unfortunately for consumers who chose the “Buy” option, this is deceptive and untrue. Rather, the ugly truth is that Defendant secretly reserves the right to terminate the consumers’ access and use of the Video Content at any time, and has done so on numerous occasions, leaving the consumer without the ability to enjoy their already-bought Video Content.

Defendant’s representations are misleading because they give the impression that the Video Content is purchased – i.e. the person owns it - when in fact that is not true because Defendant or others may revoke access to the Video Content at any time and for any reason.

In so representing the “Purchase” of Video Content as true ownership of the content, Defendant took advantage of the (1) cognitive shortcuts made at the point-of-sale, e.g. Rent v. Buy and (2) price of the Video Content, which is akin to an outright purchase versus a rental.

Though some consumers may get lucky and never lose access to any of their paid-for media, others may one day find that their Video Content is now completely inaccessible. Regardless, all consumers have overpaid for the Video Content because they are not in fact owners of the Video Content, despite have paid extra money to “Buy” the product.

The plaintiff (or rather, her lawyers) are trying to make this a class action lawsuit, and are arguing that (among other things) this is false advertising. I am, not surprisingly, sympathetic to the plaintiff -- and remain disappointed at how copyright and similar restrictions are being used to chip away at ownership and actual property rights. That said... I'm not that optimistic the case will get very far. In the past, companies have been able to wiggle out of similar claims, and I'm pretty sure that Amazon tries to push disputes like this to binding arbitration, meaning that the lawsuit may be dead on arrival.

Still, it's yet another reminder of how copyright is chipping away at real property.





Daily Deal: LingvaNex Translator

Lingvanex Translator was created with the mission to enable people to read, write, and speak different languages anywhere in the world. It can translate text, voice, images, websites, and documents. It works on a wide range of platforms including iOS, Android, macOS, Windows, and more, so you can start translating media in more than 112 languages. It's on sale for $80.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.





No, Congress Can't Fix The Broken US Broadband Market In A Mad Dash During A Pandemic

COVID-19 has shone a very bright light on the importance of widely available, affordable broadband. Nearly 42 million Americans lack access to any broadband whatsoever -- double FCC estimates. And millions more can't afford service thanks to a lack of competition among very powerful, government-pampered telecom monopolies.

As usual, with political pressure mounting to "do something," DC's solution is going to be to throw more money at the problem:

"The plan unveiled Thursday would inject $80 billion over five years into expansion of broadband infrastructure into neglected rural, suburban and urban areas, with an emphasis on communities with high levels of poverty. It includes measures to promote rapid building of internet systems, such as low-interest financing for infrastructure projects."

To be clear, subsidies often do help shore up broadband availability and coverage. The problem is that the United States government, largely captured by telecom giants with a vested interest in protecting regional monopolies, utterly sucks at it.

Despite ample pretense to the contrary, nobody in the US government actually knows where broadband is currently available. Data supplied by ISPs has never been rigorously fact-checked by a government fearful of upsetting deep-pocketed campaign contributors (and valued NSA partners). As a result, our very expensive ($350 million at last count) FCC broadband coverage map creates a picture of availability and speed that's complete fantasy. It's theater designed to disguise the fact that US broadband is mediocre on every broadband metric that matters. Especially cost.

While there has been some effort to fix the mapping problem via recent legislation, the FCC still needs several years (and more money) to do so. And while you'd think this would be more obvious, you can't fix a problem you can't even effectively measure. There's also not much indication that the $80 billion, while potentially well intentioned, would actually get where it needs to go. Especially right now, when federal oversight is effectively nonexistent.

You may or may not have noticed this, but US telecom is a corrupt, monopolized mess. Giants like AT&T and Comcast all but own state and federal legislatures and, in many instances, literally write the law. Feckless regulators bend over backward to avoid upsetting deep-pocketed campaign contributors. So when subsidies are doled out, they very often don't end up where regulators and lawmakers intended. There's an endless ocean of examples where these giants took billions in taxpayer subsidies to deploy fiber networks that were never fully delivered.

If you were to do a meaningful audit (which we've never done because, again, we're not willing to adequately track the problem or stand up to dominant incumbent corporations), you'd very likely find that American taxpayers have already paid for fiber to every home several times over.

That's not to say that there aren't things Congress could do to help the disconnected during COVID-19. Libraries, for example, have been begging the FCC for the ability to offer expanded WiFi hotspot access (via mobile school buses) to disconnected communities without running afoul of FCC E-Rate rules. But while the FCC said libraries can leave existing WiFi on without penalty, it has been mute about whether they can extend coverage outside of library property. Why? As a captured agency, the FCC doesn't like anything that could potentially result in Comcast or AT&T making less money.

None of this is to say that we shouldn't subsidize broadband deployment once we get a handle on the mapping problem. But it's a fantasy to think we're going to immediately fix a 30-year-old problem with an additional $80 billion in a mad dash during a pandemic. US broadband dysfunction was built up over decades. It's the product of corruption and rot that COVID-19 is exposing at every level of the US government. The only way to fix it is to stand up to industry, initiate meaningful reform, adopt policies that drive competition to market, and jettison feckless lawmakers and regulators whose dominant motivation is protecting AT&T, Verizon, Comcast, and Spectrum revenues.

Maybe the pandemic finally provides the incentive to actually do that, but until the US does, these subsidization efforts are largely theater.





Utah Pulls Plug On Surveillance Contractor After CEO's Past As A White Supremacist Surfaces

A couple of months ago, a records request revealed a private surveillance contractor had access to nearly every piece of surveillance equipment owned and operated by the state of Utah. Banjo was the company with its pens in all of the state's ink. Banjo's algorithm ran on top of Utah's surveillance gear: CCTV systems, 911 services, location data for government vehicles, and thousands of traffic cameras.

All of this was run through Banjo's servers, which are conveniently located in Utah government buildings. Banjo's offering is of the predictive policing variety. The CEO claims its software can "find crime" without any collateral damage to privacy. This claim is based on the "anonymization" of harvested data -- a term that is essentially meaningless once enough data is collected.
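The "anonymization" claim tends to fall apart in practice: stripping names from records does little when quasi-identifiers remain and can be joined against other datasets. A minimal, hypothetical sketch of that failure mode (the datasets, field names, and people are invented for illustration; this is not drawn from Banjo's actual systems):

```python
# Sketch of re-identification: "anonymized" records that keep
# quasi-identifiers (ZIP code, birth year, gender) can be re-linked
# to named individuals by joining against a second, public dataset.

# An "anonymized" event log with the direct identifier (name) removed.
anonymized_events = [
    {"zip": "84101", "birth_year": 1984, "gender": "F", "event": "911 call"},
    {"zip": "84111", "birth_year": 1990, "gender": "M", "event": "traffic stop"},
]

# A separate, public dataset (think voter rolls) sharing the same attributes.
public_records = [
    {"name": "Alice Doe", "zip": "84101", "birth_year": 1984, "gender": "F"},
    {"name": "Bob Roe", "zip": "84111", "birth_year": 1990, "gender": "M"},
    {"name": "Carol Poe", "zip": "84102", "birth_year": 1984, "gender": "F"},
]

def reidentify(event, records):
    """Return the names of everyone matching the event's quasi-identifiers."""
    keys = ("zip", "birth_year", "gender")
    return [r["name"] for r in records if all(r[k] == event[k] for k in keys)]

for event in anonymized_events:
    # If exactly one person matches, the "anonymous" record is theirs.
    print(event["event"], "->", reidentify(event, public_records))
```

Re-identification research has repeatedly shown that a handful of such attributes uniquely identifies a large share of the population, which is why "anonymized" claims deserve skepticism once enough data is pooled.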

This partnership is now on the rocks, thanks to an investigation by Matt Stroud and OneZero. Banjo's CEO, Damien Patton, apparently spent a lot of his formative years hanging around with white supremacists while committing crimes.

In grand jury testimony that ultimately led to the conviction of two of his associates, Patton revealed that, as a 17-year-old, he was involved with the Dixie Knights of the Ku Klux Klan. On the evening of June 9, 1990 — a month before Patton turned 18 — Patton and a Klan leader took a semi-automatic TEC-9 pistol and drove to a synagogue in a Nashville suburb. With Patton at the wheel, the Ku Klux Klan member fired onto the synagogue, destroying a street-facing window and spraying bullets and shattered glass near the building’s administrative offices, which were next to that of the congregation’s rabbi. No one was struck or killed in the shooting. Afterward, Patton hid on the grounds of a white supremacist paramilitary training camp under construction before fleeing the state with the help of a second Klan member.

If you're wondering where the state of Utah's due diligence is in all of this, there's a partial explanation for this lapse: the feds, who brought Patton in, screwed up on their paperwork.

Because Patton’s name was misspelled in the initial affidavit of probable cause filed in Brown’s case — an FBI agent apparently spelled Damien with an “o” rather than an “e” — any search of a federal criminal court database for “Damien Patton” would not have surfaced the affidavit.
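The failure mode described in that quote is a plain exact-match lookup: one wrong letter and the record simply never surfaces. A short sketch of why that happens, and how a similarity-based search would have tolerated the variant (the other names in this hypothetical index are invented):

```python
# Exact-match vs. fuzzy search over a hypothetical court-records index.
import difflib

# Names as they appear in filed documents (with the misspelling on record).
court_index = ["Damion Patton", "John Brown", "Jane Smith"]

query = "Damien Patton"

# An exact-match lookup (what a basic database search does) finds nothing,
# because "Damien" != "Damion".
exact_hits = [name for name in court_index if name == query]

# A similarity-based lookup tolerates the one-letter difference.
fuzzy_hits = difflib.get_close_matches(query, court_index, n=3, cutoff=0.9)

print(exact_hits)   # []
print(fuzzy_hits)   # ['Damion Patton']
```

Real record systems that need to catch near-misses typically use phonetic or edit-distance matching for exactly this reason; a strict string-equality search silently hides every typo.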

Now that his past has been exposed, the state of Utah has announced it won't be working with Banjo.

The Utah attorney general’s office will suspend use of a massive surveillance system after a news report showed that the founder of the company behind the effort was once an active participant in a white supremacist group and was involved in the shooting of a synagogue.

The AG's office can only shut down so much of Banjo's surveillance software. Other government agencies not directly controlled by the state AG are making their own judgment calls. The University of Utah is suspending its contract with Banjo, but the state's Department of Public Safety has only gone so far as to "launch a review" of its partnership with the company. City agencies and a number of police departments that have contracts with Banjo have yet to say whether they will terminate theirs.

And the AG's reaction isn't a ban. The office appears to believe it might be able to work through this.

“While we believe Mr. Patton’s remorse is sincere and believe people can change, we feel it’s best to suspend use of Banjo technology by the Utah attorney general’s office while we implement a third-party audit and advisory committee to address issues like data privacy and possible bias,” Piatt said. “We recommend other state agencies do the same.”

It's refreshing to hear a prosecutor state that it's possible for former criminals to turn their lives around and become positive additions to their communities, but one gets the feeling this sort of forgiveness is only extended to ex-cons who have something to offer law enforcement agencies. Everyone else is just their rap sheet forever, no matter how many years it's been since their last arrest.

The other problem here is the AG's office's tacit admission that it did not take data privacy or possible bias into account before granting Banjo access to the state's surveillance equipment, allowing it to set up servers in government buildings, and giving it free rein to dust everything with its unaudited AI pixie dust.

These are all steps that should have taken place before any of this was implemented, even if the state had chosen to do business with a company with a less controversial CEO. This immediate reaction is the right step to take, but a little proactivity now and then would be a welcome change.





As More Students Sit Online Exams Under Lockdown Conditions, Remote Proctoring Services Carry Out Intrusive Surveillance

The coronavirus pandemic and its associated lockdowns in most countries have forced major changes in the way people live, work and study. Online learning is now routine for many, and is largely unproblematic, not least because it has been used for many years. Online testing is trickier, however, since many teachers are concerned that students might use their isolated situation to cheat during exams. One person's problem is another person's opportunity, and a number of proctoring services claim to stop, or at least minimize, cheating during online tests. One thing they have in common is that they tend to be intrusive, and show little respect for the privacy of the people they monitor.

As an article in The Verge explains, some employ humans to watch over students using Zoom video calls. That's reasonably close to a traditional setup, where a teacher or proctor watches students in an exam hall. But there are also webcam-based automated approaches, as explored by Vox:

For instance, Examity also uses AI to verify students' identities, analyze their keystrokes, and, of course, ensure they're not cheating. Proctorio uses artificial intelligence to conduct gaze detection, which tracks whether a student is looking away from their screens.

It's not just in the US that these extreme surveillance methods are being adopted. In France, the University of Rennes 1 is using a system called Managexam, which adds a few extra features: the ability to detect "inappropriate" Internet searches by the student, the use of a second screen, or the presence of another person in the room (original in French). The Vox article notes that even when these systems are deployed, students still try to cheat using new tricks, and the anti-cheating services try to stop them doing so:

it's easy to find online tips and tricks for duping remote proctoring services. Some suggest hiding notes underneath the view of the camera or setting up a secret laptop. It's also easy for these remote proctoring services to find out about these cheating methods, so they're constantly coming up with countermeasures. On its website, Proctorio even has a job listing for a "professional cheater" to test its system. The contract position pays between $10,000 and $20,000 a year.

As the arms race between students and proctoring services escalates, it's surely time to ask whether the problem isn't people cheating, but the use of old-style, analog testing formats in a world that has been forced by the coronavirus pandemic to move to a completely digital approach. Rather than spending so much time, effort and money on trying to stop students from cheating, maybe we need to come up with new ways of measuring what they have learnt and understood -- not ways that are immune to cheating, but ways where cheating has no meaning. Obvious options include "open book" exams, where students can use whatever resources they like, or even abolishing formal exams completely and opting for continuous assessment. Since the lockdown has forced educational establishments to re-invent teaching, isn't it time they re-invented exams too?

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.