
strip for April / 22 / 2020 - Like and Subscribe





ComicLab Podcast with Gale Galligan

EPISODE SUMMARY Today's show is brought to you by Wacom — makers of the incredible Wacom One! This week, the ComicLab guys talk shop with Gale Galligan, creator of the bestselling Baby-Sitters Club graphic novels. See all of Gale's latest at Galesaur.com.





strip for April / 24 / 2020 - Yes and...





strip for April / 27 / 2020 - Quarks





Original Art Up for Grabs!

Friends! I put a fun piece of original art up on eBay, starting at one penny: Go snag it! --> https://www.ebay.com/itm/293555003346





strip for April / 30 / 2020 - Thunderdome





What's the word on word balloons?

EPISODE SUMMARY This week, Dave and Brad drill down deep on a crucial topic in comics — word balloons! Then Dave talks about his recent Kickstarter, and why it didn't fund. Plus... how will the global pandemic affect comic shops, distributors, the USPS, and... webcomics? Today's show is brought to you by Wacom — makers of the incredible Wacom One!





strip for May / 1 / 2020 - Megatron





strip for May / 4 / 2020 - Two Strategies





strip for May / 6 / 2020 - Family Secrets





This Week's ComicLab Podcast!

EPISODE SUMMARY This week, Dave and Brad talk about the best Content Management System (CMS) for publishing webcomics. Toocheke is brand new, and Brad's a big fan. Today's show is brought to you by Wacom — makers of the incredible Wacom One!





HANA – JONI MITCHELL

Hana steps out of a storm Into a stranger’s warm, but Hard-up kitchen. She sees what must be done So she takes off her coat Rolls up her sleeves And starts pitchin’ in. Hana has a special knack For getting people back on the right track ‘Cause she knows They all matter So she doesn’t […]





down 2 u – Joni Mitchell

Everything comes and goes Marked by lovers and styles of clothes Things that you held high And told yourself were true Lost or changing as the days come down to you Down to you Constant stranger You’re a kind person You’re a cold person too It’s down to you It all comes down to you […]





Joni Mitchell

“The Fiddle And The Drum” And so once again My dear Johnny my dear friend And so once again you are fightin’ us all And when I ask you why You raise your sticks and cry, and I fall Oh, my friend How did you come To trade the fiddle for the drum You say […]





My CNN editorial, how it all came to be

  So I wrote an op-ed about the recent Macmillan/ebooks kerfuffle for CNN. Here’s how that all worked…. I got...





Ask A Librarian: What About Controlled Digital Lending?

From a friend: Please explain to me your enthusiasm for controlled digital lending. Please let me know what you think...





Ask a Librarian: Older person wanting to learn about tech

Subtitled: What’s the Yahoo! Internet Life for this generation? From a friend: A nice older lady asked for advice on...





2019 reading list and commentary

I started 132 books this year and finished 127. New this year: a twitter thread of everything I read in...





Our Library Associations

I’ve been spending some of the wintertime outlasting the blues and making sure that Wikipedia’s got entries for every state...





Ask A Librarian: What is the deal with “free” ebook sites?

It’s been an odd set of months. I got busy with Drop-In Time and then very un-busy. I’ve been keeping...





Stop-And-Think

I hold these truths to be self-evident: Any law whatsoever, no matter how desirable on balance, will impose some costs on someone somewhere. In any society with more than about 12 people, it is virtually certain that those costs will be borne unequally. If the costs are borne unequally, then the costs borne by various […]





The Rules of Excommunication

If Bernie Sanders wants to say that Fidel Castro occasionally did something good, while acknowledging that he often did things that were very bad, I think that’s a reasonable position. (It might also be reasonable to say that Adolf Hitler occasionally did something good, though offhand I can’t think of a good example.) But surely […]





Is It A Crime to Stop the Economy?

[I am happy to turn this space over to my former colleague and (I trust) lifelong friend Romans Pancs, who offers what he describes as a polemical essay. It has no references and no confidence intervals. It has question marks. It makes a narrow point and does not weigh pros and cons. It is an […]





Animation

You should watch this: Click here for full-size version. Click here to comment or read others’ comments.





Current Events

The poster to the left hangs on the wall of my office. Can you figure out the pattern to the sequence? Now can you estimate the size of the nth entry? John Horton Conway died yesterday, a victim of Covid-19. His unique mathematical style combined brilliance and playfulness in equal measure. I came across his […]





Goofus, Gallant and the Law

I. Why do some people sign up to have their brains frozen for possible future resurrection, while others don’t? You might think it’s because the first group has more faith in future technology, but Scott Alexander has survey data to suggest otherwise. Active members of the forum lesswrong.com, many of whom had pre-paid for brain […]





Aftermath

The victors in last week’s crossword challenge were: First place, with a score of 276/276: A tie between Dan Williams and Richard Kennaway. Second place, but heartbreakingly close, with a score of 274/276: Another tie, between Tim Goodwyn and the team of Dan Grayson & Carol Livingstone Third place: Paul Epps Fourth Place: Eric Dinsdale […]





Are Old Lives Worth Less?

For cost-benefit analysis, the usual ballpark figure for the value of a life is about $10,000,000. But I keep hearing it suggested that when it comes to fighting a disease like Covid-19, which mostly kills the elderly, this value is too high. In other words, an old life is worth less than a young life. […]





"Best User of May" contest with Eksmo ("Эксмо") on MyCharm.ru

In spite of everything, life goes on, and May is arriving, one of the magical months of the year. In many regions and countries people are still staying at home, which means all those pending chores have a chance of finally getting done, and more time can be devoted to favorite websites.





PHP 7.2.30 Release Announcement - PHP: Hypertext Preprocessor

The PHP development team announces the immediate availability of PHP 7.2.30. This is a security release. All PHP 7.2 users are encouraged to upgrade to this version. For source downloads of PHP 7.2.30, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.





Platform.sh + Lando: local dev in perfect sync with the cloud - platform.sh

Platform.sh removes a major pain point for developers: having to invest time in managing servers, virtual machines, or containers. Instead, Platform.sh enables developers to focus 100% of their time on their code. Since the beginning, Platform.sh has provided instant cloning capability, so dev teams can work on perfect copies of their production sites in the cloud for every Git branch. Now, in partnership with Lando, we’re extending that capability to the desktop.





Switching phubb's HTTP client - Christian Weiske

phubb is a WebSub hub that notifies subscribers in realtime when your website is updated.

Up to this year, phubb sent HTTP requests (GET and POST) with file_get_contents() and an HTTP stream context - see my previous example.
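For context, a minimal sketch of that older approach (file_get_contents() with a stream context); the hub URL and form fields here are illustrative, not taken from phubb's actual code:

```php
<?php
// Sketch of a POST request via file_get_contents() and a stream context.
// The URL and payload below are illustrative placeholders.
$context = stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => http_build_query([
            'hub.mode' => 'publish',
            'hub.url'  => 'https://example.org/feed',
        ]),
        'timeout' => 10,
    ],
]);

// The actual request (commented out here to avoid network access):
// $body = file_get_contents('https://example.org/hub', false, $context);
// On failure $body is false; $http_response_header holds the response headers.

$options = stream_context_get_options($context);
```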

But then I needed a 100% correct way of detecting a page's Hub URL, and copied the code from phinde, my blog search engine. With that I introduced a dependency to PEAR's good old HTTP_Request2 library and I decided to use that library for all requests.

Unfortunately, now the problems began: during development I got an error in about one of every 10-20 requests on my machine and could not find the cause:

PHP Fatal error:  Uncaught HTTP_Request2_MessageException: Malformed response:  in HTTP/Request2/Adapter/Socket.php on line 1019

#0 HTTP/Request2/Adapter/Socket.php(1019): HTTP_Request2_Response->__construct('', true, Object(Net_URL2))
#1 HTTP/Request2/Adapter/Socket.php(136): HTTP_Request2_Adapter_Socket->readResponse()
#2 HTTP/Request2.php(946): HTTP_Request2_Adapter_Socket->sendRequest(Object(phubbHttpRequest))
#3 phubb/src/phubb/HttpRequest.php(22): HTTP_Request2->send()
#4 phubb/src/phubb/Task/Publish.php(283): phubbHttpRequest->send()
#5 phubb/src/phubb/Task/Publish.php(248): phubbTask_Publish->fetchTopic(Object(phubbModel_Topic))
#6 phubb/src/phubb/Task/Publish.php(77): phubbTask_Publish->checkTopicUpdate('http://push-tes...')
#7  in HTTP/Request2/Response.php on line 215

The socket adapter has this problem, and I did not want to try to debug it. (No idea if the cURL adapter has it; I do not want to rely on php-curl.) Finding a new HTTP library was the only option.

New HTTP library

The PHP Framework Interop Group has several HTTP-related proposals; one of them is PSR-18: HTTP Client. Now that we have a standardized way to send HTTP requests in 2020, I should use a library that implements it.

The psr-18 topic on GitHub listed some clients:

Symfony's HTTP client was among them, and it provides a mock client for unit tests! Unfortunately, it also introduces a million dependencies.

There were two others that looked ok-ish at first sight (diciotto and http-client-curl), but neither had a mock client, and the latter was cURL-only. Again, nothing for me.

Then I found PHP-HTTP, which promises a standard interface for HTTP clients in PHP, and it supports PSR-18! It even has a socket client with nearly no dependencies, and a mock client for unit tests. I'll try that one for now.
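To illustrate why a mock client matters for unit tests, here is a small sketch of the PSR-18 shape. The real interfaces live in the psr/http-client and psr/http-message packages; the simplified stand-ins below only mirror their shape (using plain objects) so the example stays self-contained.

```php
<?php
// Simplified stand-in for Psr\Http\Client\ClientInterface (assumption:
// request/response are plain objects so no Composer packages are needed).
interface HttpClientInterface
{
    public function sendRequest(object $request): object;
}

// A mock client returns a canned response instead of using the network,
// which is what makes HTTP-dependent code unit-testable.
class MockHttpClient implements HttpClientInterface
{
    /** @var object */
    private $cannedResponse;

    public function __construct(object $cannedResponse)
    {
        $this->cannedResponse = $cannedResponse;
    }

    public function sendRequest(object $request): object
    {
        return $this->cannedResponse;
    }
}

$client   = new MockHttpClient((object) ['status' => 200, 'body' => 'ok']);
$response = $client->sendRequest((object) ['method' => 'GET', 'url' => '/']);
```

In a real project the production code would depend only on the interface, so the socket client and the mock client are interchangeable.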





PHP Internals News: Episode 50: The RFC Process - Derick Rethans


In this episode of "PHP Internals News", Henrik Gemal (LinkedIn, Website) asks me about how PHP's RFC process works, and I try to answer all of his questions.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:16

Hi, I'm Derick. And this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 50. Today I'm talking with Henrik Gemal after he reached out with a question. You might know that at the end of every podcast, I ask: if you have any questions, feel free to email me. And Henrik was the first person to actually do so within a year and a half's time. For fun, I'm thinking that instead of me asking the questions, I'm letting Henrik ask the questions today, because he suggested that we should do a podcast about how the RFC process actually works. Henrik, would you please introduce yourself?

Henrik Gemal 0:52

Yeah, my name is Henrik Gemal. I live in Denmark. I'm the CTO of DinnerBooking, which does reservation systems for restaurants. I've been doing PHP development for more than 10 years, but I'm not coding so much now; now I'm managing a big team of PHP developers. And I've also been involved in the open source development of Mozilla Firefox.

Derick Rethans 1:19

So usually I prepare the questions, but in this case, Henrik has prepared the questions. So I'll hand over to him to get started with them. And I'll try to do my best to answer the questions.

Henrik Gemal 1:27

I heard a lot about these RFCs. And I was interested in the process of it. So I'm just starting right off here, who can actually do an RFC? Is it anybody on the internet?

Derick Rethans 1:38

Yeah, pretty much. In order to be able to do an RFC, what you would need is you need to have an idea. And then you need access to our wiki system to be able to actually start writing that, well, not to write them, to publish it. The RFC process is open for everybody. In the last year and a half or so, some of the podcasts that I've done have been with people that have been contributing to PHP for a long time. But in other cases, it's people like yourself that have an idea, come up, work together with somebody to work on a patch, and then create an RFC out of that. And that then goes through the whole process. And sometimes they get accepted, and sometimes they don't.

Henrik Gemal 2:16

How technical are the RFCs? Is it like coding? Or is it more like the idea in general?

Derick Rethans 2:23

The idea needs to be there, it needs to be thought out. It needs to have a good reason for why we want to add or change something in PHP. The motivation is almost as important as what the change or addition actually is about. Now, that doesn't always get enough attention, in my opinion, but that is an important thing. Now with the idea we need to talk about what changes it has on the rest of the ecosystem, whether there are backward compatibility breaks in there, how it affects extensions, or sometimes how it affects OPcache. Sometimes considerations have to be taken for that because it's something quite important in the PHP ecosystem. And it is recommended that it comes with a patch, because it's often a lot easier to talk about an implementation than to talk about the idea. But that is not a necessity. There have been quite some RFCs where the idea was there, but there wasn't a patch right away yet. It is less likely that these RFCs will g

Truncated by Planet PHP, read more at the original (another 15224 bytes)





Making bugs ex-bugs with Xdebug - platform.sh

Xdebug is an indispensable tool for every PHP developer. PHP's favorite real-time debugger, it supports breakpoints, more detailed debug output, and deeper introspection of PHP code to determine just what it's doing (and what it's doing wrong). Sadly, it comes at a huge cost in performance, making it unsuitable for production. Not on Platform.sh, though. Xdebug is now available on all Grid environments, secure and without a performance loss.





Creating a simple link registry - Matthias Noback

The problem: if you publish any document as PDF, in print, etc. and the text contains URLs, there is a chance that one day those URLs won't work anymore. There's nothing you can do about that; it happens.

Luckily, this is a solved problem. The solution is to link to a stable and trustworthy website, that is, one that you maintain and host (of course, you're trustworthy!). Then in the document you link to that website, and the website redirects visitors to the actual location.

An example: my book contains a link to https://enjoy.gitstore.app/repositories/matthiasnoback/read-with-the-author. When I moved that repository to a new organization on GitHub, this link resulted in a 404 Page not found error. The proper URL is now https://enjoy.gitstore.app/repositories/read-with-the-author/read-with-the-author. Chris from Gitstore was able to save the day by setting up a redirect on their site, but I wanted to make sure this kind of problem would never be a problem for me again.

The ingredients for the solution:

  • A domain name (I registered advwebapparch.com)
  • A simple website that can redirect visitors to the actual locations

I wanted to hook this new website into my existing Docker-based setup, which uses Traefik to forward traffic to the right container based on labels. It turns out that with a simple Nginx image and some custom setup we can easily set up a website that redirects visitors.

The Dockerfile for such an image:

FROM nginx:stable-alpine
COPY default.conf /etc/nginx/conf.d/default.conf

Where default.conf looks like this:

server {
    listen 80 default_server;
    index index.html;
    root /srv;

    error_page 404 /404.html;

    rewrite /repository https://enjoy.gitstore.app/repositories/read-with-the-author/read-with-the-author redirect;
}

This already works, and when I deploy the resulting image to the server that receives traffic for advwebapparch.com, a request for /repository will indeed redirect a visitor to https://enjoy.gitstore.app/repositories/read-with-the-author/read-with-the-author using a temporary redirect.

Generating the Nginx configuration from a text file

When I'm working on my book, I don't want to manually update a server configuration file every time I'm adding a URL. Instead, I'd like to work with a simple text file. Let's name this file forwards.txt:

/repository https://enjoy.gitstore.app/repositories/read-with-the-author/read-with-the-author
/blog https://matthiasnoback.nl

And then I want the Docker image build process to add the rewrite rules automatically, so I wrote a little PHP script that runs during the build. Here's what the Dockerfile looks like; it uses a multi-stage build:

FROM php:7.4-alpine as php
# This will copy build.php from the build context to the image
COPY . .
# This will generate default.conf based on template.conf
RUN php build.php

FROM nginx:stable-alpine
# Copy the default.conf from the php image to the nginx image
COPY --from=php default.conf /etc/nginx/conf.d/default.conf

Here's what happens inside the PHP script:

function insertRewritesInNginxConf(string $conf): string
{
    $rewrites = [];

    foreach (file('forwards.txt') as $line) {
        $line = trim($line);
        if (empty($line)) {
            continue;
        }

        $rewrites[] = '    ' . 'rewrite ' . $line . ' redirect;';
    }

    return str_replace(
        '%INSERT_URL_REWRITES_HERE%',
        implode("\n", $rewrites),
        $conf
    );
}

/*
 * Generate the Nginx configuration which includes all the actual
 * redirect instructions
 */
file_put_contents(
    'default.conf',
    insertRewritesInNginxConf(file_get_contents('template.conf'))
);

We should add a bit of validation for the data from the forwards.txt file so we don't end up with a broken Nginx configuration, but otherwise, this works just fine.
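One possible shape for that validation, sketched below; the function name and rules are illustrative, not from the actual build script. It accepts only lines that consist of an absolute path followed by a valid URL.

```php
<?php
// Hypothetical validator for forwards.txt lines of the form
// "/path https://target". Rejects anything that would produce a
// broken Nginx rewrite rule.
function isValidForwardLine(string $line): bool
{
    $parts = preg_split('/\s+/', trim($line));
    if (count($parts) !== 2) {
        return false;
    }
    [$path, $target] = $parts;

    // The local path must be absolute; the target must be a valid URL.
    return $path[0] === '/'
        && filter_var($target, FILTER_VALIDATE_URL) !== false;
}
```

The build script could then skip invalid lines, or better, fail the image build loudly so a bad entry never reaches the server.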

I don't want to manually check that all the links that are inside the "link registry" still work. Instead, I'd like to use Oh Dear for that, which does uptime monitoring and checks for broken links as well.

For this purpose I added another function to the PHP script, which, based

Truncated by Planet PHP, read more at the original (another 1844 bytes)





PHP Internals News: Episode 51: Object Ergonomics - Derick Rethans


In this episode of "PHP Internals News" I talk with Larry Garfield (Twitter, Website, GitHub) about a blog post that he has written related to PHP's Object Ergonomics.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:16

Hi, I'm Derick. And this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 51. Today I'm talking with Larry Garfield, not about an RFC for once, but about a blog post that he's written called Object Ergonomics. Larry, would you please introduce yourself?

Larry Garfield 0:38

Hello World. My name is Larry Garfield, also Crell, CRELL, on various social media. I work at Platform.sh in developer relations. We're a continuous deployment cloud hosting company. I've been writing PHP for 21 years and have been an active gadfly and nudge for at least 15 of those.

Derick Rethans 1:01

In the last couple of months, we have seen quite a lot of smaller RFCs about all kinds of little features here and there, to do with making the object oriented model of PHP a little bit better. I reckon this is also the nudge behind you writing a slightly longer blog post titled "Improving PHP object ergonomics".

Larry Garfield 1:26

If by slightly longer you mean 14 pages? Yes.

Derick Rethans 1:29

Yes, exactly. Yeah, it took me a while to read through. What made you write this document?

Larry Garfield 1:34

As you said, there's been a lot of discussion around improving PHP's general user experience of working with objects in PHP. Where there's definitely room for improvement, no question. And I found a lot of these to be useful in their own right, but also very narrow and narrow in ways that solve the immediate problem but could get in the way of solving larger problems later on down the line. So I went into this with an attitude of: Okay, we can kind of piecemeal and attack certain parts of the problem space. Or we can take a step back and look at the big picture and say: Alright, here's all the pain points we have. What can we do that would solve not just this one pain point. But let us solve multiple pain points with a single change? Or these two changes together solve this other pain point as well. Or, you know, how can we do this in a way that is not going to interfere with later development that we've talked about. We know we want to do, but isn't been done yet. So how do we not paint ourselves into a corner by thinking too narrow?

Derick Rethans 2:41

It's a curious thing, because a more narrow RFC is likely easier to get accepted, because it doesn't pull in a whole set of other problems as well. But of course, as you say, if the whole idea hasn't been thought through, then some of these things might not actually end up being beneficial. Because it can be combined with some other things to directly address the problems that we're trying to solve, right?

Larry Garfield 3:07

Yeah, it comes down to what are the smallest changes we can make that taken together have the largest impact. That kind of broad picture thinking is something that is hard to do in PHP, just given the way it's structured. So I took a stab at that.

Derick Rethans 3:21

What are the main problems that we should address?

Larry Garf

Truncated by Planet PHP, read more at the original (another 29525 bytes)





Xdebug Update: April 2020 - Derick Rethans


Another monthly update where I explain what happened with Xdebug development in this past month. These will be published on the first Tuesday after the 5th of each month. Patreon supporters will get it earlier, on the first of each month. You can become a patron to support my work on Xdebug. If you are leading a team or company, then it is also possible to support Xdebug through a subscription.

In March, I worked on Xdebug for about 60 hours, on the following things:

Xdebug 2.9.5

The 2.9.5 release addresses a few bugs. One of them was a follow on from the issue where Xdebug would crash when another extension would run code in PHP's Request Init stage, but only on a second or later request in the same PHP process. As this is not something that's easy to catch with PHP's testing framework that Xdebug uses, this issue slipped through the cracks.

The release fixes another bug, where throwing an exception from within a destructor would crash Xdebug. The fix for this was merely making sure that PHP's internal state is still available:

- if (!(ZEND_CALL_INFO(EG(current_execute_data)) & ZEND_CALL_HAS_SYMBOL_TABLE)) {
+ if (EG(current_execute_data) && !(ZEND_CALL_INFO(EG(current_execute_data)) & ZEND_CALL_HAS_SYMBOL_TABLE)) {

Beyond these two crashes, the release also addressed an issue where Xdebug did not always correctly detect where executable code could exist for code coverage analysis. Over the last decade, PHP has been getting more and more optimised, with more internal engine instructions. Unfortunately that sometimes means that these are not hooked into by Xdebug, to see whether there could be a line of code that would make use of these opcodes. As this is often very dependent on how developers lay out their code, these issues are often found by them. Luckily, these issues are trivially fixed, as long as I have access to just the file containing that code. I then analyse it with vld to see which opcode (PHP engine instruction) I have missed.

Xdebug 3 and Xdebug Cloud

Most of my time was spent on getting Xdebug Cloud to a state where I can invite select developers to alpha test it. This includes allowing for Xdebug to connect to Xdebug Cloud. There is currently a branch available, but it still lacks the addition of SSL encryption, which is a requirement for allowing safe transport of debug information.

The communications between an IDE and Xdebug through Xdebug Cloud is working, with a few things related to detecting disconnections more reliably still outstanding.

As Xdebug Cloud needs integration in debugging clients (such as PhpStorm, and other IDEs), I have been extending the dbgpProxy tool to act as an intermediate link between existing IDEs and Xdebug Cloud without IDEs having to change anything. This work is still ongoing, and is not documented yet, but I hope to finish that in the next week. Once that and SSL support in the Xdebug to Xdebug Cloud communication have been finalized, I will reach out to subscribers of the Xdebug Cloud newsletter to see if anybody is interested in trying it out.

Podcast

The PHP Internals News continues its second season. Episodes in the last month included a discussion on PHP 8's JIT engine and increasing complexity,

Truncated by Planet PHP, read more at the original (another 720 bytes)






PHP Internals News: Episode 52: Floats and Locales - Derick Rethans


In this episode of "PHP Internals News" I talk with George Banyard (Website, Twitter, GitHub, GitLab) about an RFC that he has proposed together with Máté Kocsis (Twitter, GitHub, LinkedIn) to make PHP's float to string logic no longer use locales.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:16

Hi, I'm Derick. And this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 52. Today I'm talking with George Banyard about an RFC that he's made together with Mate Kocsis. This RFC is titled locale independent floats to string. Hello, George, would you please introduce yourself?

George Banyard 0:39

Hello, I'm George Peter Banyard. I'm a student at Imperial College and I work on PHP in my free time.

Derick Rethans 0:47

All right, so we're talking about local independent floats. What is the problem here?

George Banyard 0:52

Currently when you do a float to string conversion, so any casting or displaying of a float, the conversion will depend on the current locale. So instead of always using the decimal dot separator, if you have, for example, the German or the French locale enabled, it will use a comma to separate the decimals.

Derick Rethans 1:14

Okay, I can understand that that could be a bit confusing. What are these locales exactly?

George Banyard 1:20

So locales, which are more or less C locales, which PHP exposes to userland, are a way to change a bunch of rules for how strings and similar things get displayed at the C level. One of the issues with it is that it's global. For example, if you use the thread safe PHP version, then setlocale() is not thread safe, so it will just impact other threads where you're using it.

Derick Rethans 1:50

So a locale is a set of rules to format specific things, with floating point numbers being one of them. In which situations does the locale influence the display of floating point numbers: in every situation in PHP, or only in some?

George Banyard 2:06

Yes, it only impacts certain aspects, which is quite surprising. So a string cast, the strval() function, var_dump(), and debug_zval_dump() will all use the locale's decimal separator, and also printf() with the percentage lowercase f, but that's expected because it's locale-aware, compared to the capital F modifier.

Derick Rethans 2:32

But it doesn't, for example, have the same problem in the serialize() function or, say, var_export().

George Banyard 2:37

Yeah, and json_encode() also doesn't do that. PDO has special code which also handles this, so that all the PDO drivers get a consistent float-to-string conversion, because that could impact the databases.
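The locale-independent escape hatches George mentions can be sketched as follows; under PHP 7.x a plain (string) cast would follow the locale, while these always emit a decimal point:

```php
<?php
// These conversions ignore the locale and always use a decimal point.
$float = 3.5;

$json     = json_encode($float);       // "3.5"
$exported = var_export($float, true);  // "3.5"
$fixed    = sprintf('%F', $float);     // "3.500000" (capital F is locale-independent)
```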

Derick Rethans 2:53

How is it a problem that, with some locales enabled, PHP uses a comma instead of the decimal point? How can this cause bugs in PHP applications?

Truncated by Planet PHP, read more at the original (another 17468 bytes)





'Job Creating' Sprint T-Mobile Merger Triggers Estimated 6,000 Non-Covid Layoffs

Back when T-Mobile and Sprint were trying to gain regulatory approval for their $26 billion merger, executives repeatedly promised the deal would create jobs. Not just a few jobs, but oodles of jobs. Despite the fact that US telecom history indicates such deals almost always trigger mass layoffs, the media dutifully repeated T-Mobile and Sprint executive claims that the deal would create "more than 3,500 additional full-time U.S. employees in the first year and 11,000 more people by 2024."

About that.

Before the ink on the deal was even dry, T-Mobile began shutting down its Metro prepaid business and laying off impacted employees. When asked about the conflicting promises, T-Mobile refused to respond to press inquiries. Now that shutdown has accelerated, with estimates that roughly 6,000 employees at the T-Mobile subsidiary have been laid off as the freshly-merged company closes unwanted prepaid retailers. T-Mobile says the move, which has nothing to do with COVID-19, is just them "optimizing their retail footprint." Industry insiders aren't amused:

"Peter Adderton, the founder of Boost Mobile in Australia and in the U.S. who has been a vocal advocate for the Boost brand and for dealers since the merger was first proposed, figures the latest closures affect about 6,000 people. He cited one dealer who said he has to close 95 stores, some as early as May 1.

In their arguments leading up to the merger finally getting approved, executives at both T-Mobile and Sprint argued that it would not lead to the kind of job losses that many opponents were predicting. They pledged to create jobs, not cut them.

“The whole thing is exactly how we called it, and no one is calling them out. It’s so disingenuous,” Adderton told Fierce, adding that it’s not because of COVID-19. Many retailers in other industries are closing stores during the crisis but plan to reopen once it’s safe to do so."

None of this should be a surprise to anybody. Everybody from unions to Wall Street stock jocks had predicted the deal would trigger anywhere between 15,000 and 30,000 layoffs over time as redundant support, retail, and middle management positions were eliminated. It's what always happens in major US telecom mergers. There is 40 years of very clear, hard data speaking to this point. Yet in a blog post last year (likely to be deleted by this time next year), T-Mobile CEO John Legere not only insisted layoffs would never happen, he effectively accused unions, experts, consumer groups, and a long line of economists of lying:

"This merger is all about creating new, high-quality, high-paying jobs, and the New T-Mobile will be jobs-positive from Day One and every day thereafter. That’s not just a promise. That’s not just a commitment. It’s a fact....These combined efforts will create nearly 5,600 new American customer care jobs by 2021. And New T-Mobile will employ 7,500+ more care professionals by 2024 than the standalone companies would have."

That was never going to happen. Less competition, revolving-door captured regulators, and a broken court system mean there's less than zero incentive for T-Mobile to do much of anything the company promised while it was wooing regulators. And of course such employment growth is even less likely to happen during a pandemic, which will provide "wonderful" cover for cuts that were going to happen anyway.

Having watched more telecom megadeals like this than I can count, what usually happens is the companies leave things generally alone for about a year to keep employees calm and make it seem like deal critics were being hyperbolic. Then, once the press and public are no longer paying attention (which never takes long), the hatchets come out and the downsizing begins. When the layoffs and reduced competition inevitably arrive, they're either ignored or blamed on something else. In this case, inevitably, COVID-19.

In a few years, the regulators who approved the deal will have moved on to think tank, legal or lobbying positions at the same companies they "regulated." The same press that over-hyped pre-merger promises won't follow back up, because there's no money in that kind of hindsight policy reporting or consumer advocacy. And executives like John Legere (who just quit T-Mobile after selling his $17.5 million NYC penthouse to Giorgio Armani) are dutifully rewarded, with the real world market and human cost of mindless merger mania quickly and intentionally forgotten.




t

Hedge Fund 'Asshole' Destroying Local News & Firing Reporters Wants Google & Facebook To Just Hand Him More Money

Have you heard of Heath Freeman? He's a thirty-something hedge fund boss, who runs "Alden Global Capital," which owns a company misleadingly called "Digital First Media." His business has been to buy up local newspapers around the country and basically cut everything down to the bone, and just milk the assets for whatever cash they still produce, minus all the important journalism stuff. He's been called "the hedge fund asshole", "the hedge fund vampire that bleeds newspapers dry", "a small worthless footnote", the "Gordon Gekko" of newspapers and a variety of other fun things.

Reading through some of those links above, you find a standard playbook for Freeman's managing of newspapers:

These are the assholes who a few years ago bought the Denver Post, once one of the best regional newspapers in the country, and hollowed it out into a shell of its former self, then laid off some more people. Things got so bad that the Post’s own editorial board rebelled, demanding that if “Alden isn’t willing to do good journalism here, it should sell the Post to owners who will.”

And here's one of the other links from above telling a similar story:

The Denver newsroom was hardly alone in its misery. In Northern California, a combined editorial staff of 16 regional newspapers had reportedly been slashed from 1,000 to a mere 150. Farther down the coast in Orange County, staffers, according to industry analyst Ken Doctor, complained of rats, mildew, fallen ceilings, and filthy bathrooms. In her Washington Post column, media critic Margaret Sullivan called Alden “one of the most ruthless of the corporate strip-miners seemingly intent on destroying local journalism.”

And, yes, I think it's fair to say that many newspapers did get a bit fat and happy with their old school monopolistic hold on the news market pre-internet. And many of them failed to adapt. And so, restructuring and re-prioritizing is not a bad idea. But that's not really what's happening here. Alden appears to be taking profitable (not just struggling) newspapers, and squeezing as much money out of them directly into Freeman's pockets, rather than plowing it back into actual journalism. And Alden/DFM appears to be ridiculously profitable for Freeman, even as the journalism it produces becomes weaker and weaker. Jim Brady called it "combover journalism": using a skeleton staff to pretend to cover the news when it's clear to everyone that the job isn't really getting done.

All of that is prelude to the latest news that Freeman, who basically refuses to ever talk to the media, has sent a letter to other newspaper bosses suggesting they collude to force Google and Facebook to make him even richer.

You can see the full letter here:


Let's go through this nonsense bit by bit, because it is almost 100% nonsense.

These are immensely challenging times for all of us in the newspaper industry as we balance the two equally important goals of keeping the communities we serve fully informed, while also striving to safeguard the viability of our news organizations today and well into the future.

Let's be clear: the "viability" of your newsrooms was decimated when you fired a huge percentage of the local reporters and stuffed the profits into your pockets, rather than investing in the actual product.

Since Facebook was founded in 2004, nearly 2,000 (one in five) newspapers have closed and with them many thousands of newspaper jobs have been lost. In that same time period, Google has become the world's primary news aggregation service, Apple launched a news app with a subscription-based tier and Twitter has become a household name by serving as a distribution service for the content our staffs create.

Correlation is not causation, of course. But even if that were the case, the focus of a well-managed business would be to adapt to the changing marketplace to take advantage of, say, new distribution channels, new advertising and subscription products, and new ways of building a loyal community around your product. You know, the things that Google, Facebook and Twitter did... which your newspapers didn't do, perhaps because you fired a huge percentage of their staff and re-directed the money flow away from product and into your pocket.

Recent developments internationally, which will finally require online platforms to compensate the news industry are encouraging. I hope we can collaborate to move this issue forward in the United States in a fair and productive way. Just this month, April 2020, French antitrust regulators ordered Google to pay news publishers for displaying snippets of articles after years of helping itself to excerpts for its news service. As regulators in France said, "Google's practices caused a serious and immediate harm to the press sector, while the economic situation of publishers and news agencies is otherwise fragile." The Australian government also recently said that Facebook and Google would have to pay media outlets in the country for news content. The country's Treasurer, Josh Frydenberg noted "We can't deny the importance of creating a level playing field, ensuring a fair go for companies and the appropriate compensation for content."

We have, of course, written about both the plans in France as well as those in Australia (not to mention a similar push in Canada that Freeman apparently missed). Of course, what he's missing is... well, nearly everything. First, the idea that it's Google that's causing problems for the news industry is laughable on multiple fronts.

If newspapers feel that Google is causing them harm by linking to them and sending them traffic, then they can easily block Google, which respects robots.txt restrictions. I don't see Freeman's newspapers doing that. Second, in most of the world, Google does not monetize its Google News aggregation service, so the idea that it's somehow making money off of "their" news is not supported by reality. Third, the idea that "the news" is "owned" by the news organizations is not just laughable, but silly. After all, the news orgs are not making the news. If Freeman is going to claim that news orgs should be compensated for "their" news, then, uh, shouldn't his news orgs be paying the actual people who make the news that they're reporting on? Or is he saying that journalism is somehow special?
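For what it's worth, opting out of Google's crawling really is a two-line configuration change. A minimal robots.txt, served at the site root, that blocks Google's crawler entirely would look like this (a publisher could instead name Googlebot-News to drop out of Google News while staying in web search):

```
User-agent: Googlebot
Disallow: /
```

Googlebot honors these rules, and the fact that essentially no major publisher serves them is rather the tell.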

Finally, and most importantly, he says all of this as if we haven't seen how these efforts play out in practice. When Germany passed a similar law, Google ended up removing snippets only to be told they had to pay anyway. Google, correctly, said that if it had to license snippets, it would offer a price of $0, or it would stop linking to the sites -- and the news orgs agreed. In Spain, where Google was told it couldn't do this, the company shut down Google News and tons of smaller publications were harmed, not helped, by this policy.

This surely sounds familiar to all of us. It's been more than a decade since Rupert Murdoch instinctively observed: "There are those who think they have a right to take our news content and use it for their own purposes without contributing a penny to its production... Their almost wholesale misappropriation of our stories is not fair use. To be impolite, it's theft."

First off, it's not theft. As we pointed out at the time, Rupert Murdoch, himself, at the very time he was making these claims, owned a whole bunch of news aggregators himself. The problem was never news aggregators. The problem has always been that other companies are successful on the internet and Rupert Murdoch was not. And, again, the whole "misappropriation" thing is nonsense: any news site is free to block Google's scrapers and if it's "misappropriation" to send you traffic, why do all of these news organizations employ "search engine optimizers" who work to get their sites higher in the rankings? And, yet again, are they paying the people who make the actual news? If not, then it seems like they're full of shit.

With Facebook and Google recently showing some contrition by launching token programs that provide a modest amount of funding, it's heartening to see that the tech giants are beginning to understand their moral and social responsibility to support and safeguard local journalism.

Spare me the "moral and social responsibility to support and safeguard local journalism," Heath. You're the one who cut 1,000 journalism jobs down to 150. Not Google. You're the one who took profitable newspapers that were investing in local journalism, fired a huge number of their reporters and staff, and redirected the even larger profits into your pockets instead of local journalism.

Even if someone wants to argue this fallacy, it should not be you, Heath.

Facebook created the Facebook Journalism Project in 2017 "to forge stronger ties with the news industry and work with journalists and publishers." If Facebook and the other tech behemoths are serious about wanting to "forge stronger ties with the news industry," that will start with properly remunerating the original producers of content.

Remunerating the "original producers"? So that means that Heath is now agreeing to compensate the people who create the news that his remaining reporters write up? Oh, no? He just means himself -- the middleman -- being remunerated directly into his pocket while he continues to cut jobs from his newsroom while raking in record profits? That seems... less compelling.

Facebook, Google, Twitter, Apple News and other online aggregators make billions of dollars annually from original, compelling content that our reporters, photographers and editors create day after day, hour after hour. We all know the numbers, and this one underscores the value of our intellectual property: The New York Times reported that in 2018, Google alone conservatively made $4.7 billion from the work of news publishers. Clearly, content-usage fees are an appropriate and reasonable way to help ensure newspapers exist to provide communities across the country with robust high-quality local journalism.

First of all, the $4.7 billion is likely nonsense, but even if it were accurate, Google is making that money by sending all those news sites a shit ton of traffic. Why aren't they doing anything reasonable to monetize it? And, of course, Digital First Media has bragged about its profitability, and leaked documents suggest its news business brought in close to a billion dollars in 2017 with a 17% operating margin, significantly higher than all other large newspaper chains.

This is nothing more than "Google has money, we want more money, Google needs to give us the money." There is no "clearly" here and "usage fees" are nonsense. If you don't want Google's traffic, put up robots.txt. Google will survive, but your papers might not.

One model to consider is how broadcast television stations, which provide valuable local news, successfully secured sizable retransmission fees for their programming from cable companies, satellite providers and telcos.

There are certain problems with retransmission fees in the first place (given that broadcast television was, by law, freely transmitted over the air in exchange for control over large swaths of spectrum), and the value broadcasters got was in having a large audience to advertise to. But, more importantly, retransmission involved taking an entire broadcast channel and piping it through cable and satellite to make things easier for TV watchers who didn't want to switch between an antenna and a cable (or satellite receiver). An aggregator is not -- contrary to what one might think reading Freeman's nonsense -- retransmitting anything. It's linking to your content and sending you traffic on your own site. The only things it shows are a headline and (sometimes) a snippet to attract more traffic.

There are certainly other potential options worthy of our consideration -- among them whether to ask Congress about revisiting thoughtful limitations on "Fair Use" of copyrighted material, or seeking judicial review of how our trusted content is misused by others for their profit. By beginning a collective dialogue on these topics we can bring clarity around the best ways to proceed as an industry.

Ah, yes, let's throw fair use -- the very thing that news orgs regularly rely on to not get sued into the ground -- out the window in an effort to get Google to funnel extra money into Heath Freeman's pockets. That sounds smart. Or the other thing. Not smart.

And "a collective dialogue" in this sense appears to be collusion. As in an antitrust violation. Someone should have maybe mentioned that to Freeman.

Our newspaper brands and operations are the engines that power trusted local news in communities across the United States.

Note that it's the brands and operations -- not journalists -- that he mentions here. That's a tell.

Fees from those who use and profit from our content can help continually optimize our product as well as ensure our newsrooms have the resources they need.

Again, Digital First Media is perhaps the most profitable newspaper chain around. And it just keeps laying off reporters.

My hope is that we are able to work together towards the shared goal of protecting and enhancing local journalism.

You first, Heath, you first.

So, basically, Heath Freeman, who has spent a decade or so buying up profitable newspapers, laying off a huge percentage of their newsrooms, leaving a husk in their place, then redirecting the continued profits (often that exist solely because of the legacy brand) into his own pockets rather than into journalism... wants the other newspapers to collude with him to force successful internet companies who send their newspapers a ton of free traffic to pay him money for the privilege of sending them traffic.

Sounds credible.




t

Daily Deal: The 2020 Ultimate Work From Home Starter Kit

The 2020 Ultimate Work From Home Starter Kit has 16 courses designed to help you do your best while you work from home. There are courses on how to start your own business, how to become a freelancer or copywriter, and how to do affiliate marketing. Other courses focus on how to increase your productivity, how to lead virtual meetings, how to work with virtual teams, and more. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.




t

Appeals Court Says Prosecutors Who Issued Fake Subpoenas To Crime Victims Aren't Shielded By Absolute Immunity

For years, the Orleans Parish District Attorney's Office in Louisiana issued fake subpoenas to witnesses and crime victims. Unlike subpoenas used in ongoing prosecutions, these were used during the investigation process to compel targets to talk to law enforcement. They weren't signed by judges or issued by court clerks but they did state in bold letters across the top that "A FINE AND IMPRISONMENT MAY BE IMPOSED FOR FAILURE TO OBEY THIS NOTICE."

Recipients of these bogus subpoenas sued the DA's office. In early 2019, a federal court refused to grant absolute immunity to the DA's office for its use of fake subpoenas to compel cooperation from witnesses. The court pointed out that issuing its own subpoenas containing threats of imprisonment bypassed an entire branch of the government to give the DA's office power it was never supposed to have.

Allegations that the Individual Defendants purported to subpoena witnesses without court approval, therefore, describe more than a mere procedural error or expansion of authority. Rather, they describe the usurpation of the power of another branch of government.

The court stated that extending immunity would be a judicial blessing of this practice, rather than a deterrent against continued abuse by the DA's office.

The DA's office appealed. The Fifth Circuit Appeals Court took the case, but it seemed very unimpressed by the office's assertions. Here's how it responded during oral arguments earlier this year:

“Threat of incarceration with no valid premise?” Judge Jennifer Elrod said at one point during arguments. She later drew laughter from some in the audience when she said, “This argument is fascinating.”

“These are pretty serious assertions of authority they did not have,” said Judge Leslie Southwick, who heard arguments with Elrod and Judge Catharina Haynes.

The Appeals Court has released its ruling [PDF] and it will allow the lawsuit to proceed. The DA's office has now been denied immunity twice. Absolute immunity shields almost every action taken by prosecutors during court proceedings. But these fake subpoenas were sent to witnesses whom prosecutors seemingly had no interest in ever having testify in court. This key difference means prosecutors will have to face the state law claims brought by the plaintiffs.

Based upon the pleadings before us at this time, it could be concluded that Defendants’ creation and use of the fake subpoenas was not “intimately associated with the judicial phase of the criminal process,” but rather fell into the category of “those investigatory functions that do not relate to an advocate’s preparation for the initiation of a prosecution or for judicial proceedings.” See Hoog-Watson v. Guadalupe Cty., 591 F.3d 431, 438 (5th Cir. 2009)

[...]

Defendants were not attempting to control witness testimony during a break in judicial proceedings. Instead, they allegedly used fake subpoenas in an attempt to pressure crime victims and witnesses to meet with them privately at the Office and share information outside of court. Defendants never used the fake subpoenas to compel victims or witnesses to testify at trial. Such allegations are of investigative behavior that was not “intimately associated with the judicial phase of the criminal process.”

Falling further outside the judicial process was the DA's office itself, which apparently felt the judicial system didn't need to be included in its subpoena efforts.

In using the fake subpoenas, Individual Defendants also allegedly intentionally avoided the judicial process that Louisiana law requires for obtaining subpoenas.

The case returns to the lower court where the DA's office will continue to face the state law claims it hoped it would be immune from. The Appeals Court doesn't say the office won't ultimately find some way to re-erect its absolute immunity shield, but at this point, it sees nothing on the record that says prosecutors should be excused from being held responsible for bypassing the judicial system to threaten crime victims and witnesses with jail time.




t

Suspected DNC & German Parliament Hacker Used His Name As His Email Password

You may have seen the news reports this week that German prosecutors have issued an arrest warrant for Dmitry Badin for a massive hack of the German Parliament that made headlines in 2016. The reports about the German arrest warrant all mention that German authorities "believe" that Badin is connected to the Russian GRU and its APT28 hacking group.

The folks over at Bellingcat have done their open source intelligence investigation thing, and provided a ton of evidence to show that Badin almost certainly is part of GRU... including the fact that he registered his 2018 car purchase to the public address of a GRU building. This is not the first time this has happened. A few years back, Bellingcat also connected a bunch of people to the GRU -- including some accused of hacking by the Dutch government -- based on leaked car registration info.

There's much, much more in the Bellingcat report, but the final paragraph really stands out. Bellingcat also found Badin -- again, a hacker who is suspected in multiple massive and consequential hacks, including of email accounts -- didn't seem to be all that careful with his own security:

The most surreal absence of “practice-what-you-breach” among GRU hackers might be visible in their lackadaisical attitude to their own cyber protection. In 2018, a large collection of hacked Russian mail accounts, including user name and passwords, was dumped online. Dmitry Badin’s email — which we figured out from his Skype account, which we in turn obtained from his phone number, which we of course got from his car registration — had been hacked. He had apparently been using the password Badin1990. After this, his email credentials were leaked again as part of a larger hack, where we see that he had changed his password from Badin1990 to the much more secure Badin990.

Yes, the password for at least one of his email accounts... was apparently his own last name and the year he was born. The cobbler's kids go shoeless again.
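To make the weakness concrete: "surname plus birth year" is exactly the pattern a dictionary attack tries first, long before any brute force. A toy sketch of how few guesses that takes (the variant patterns and year range here are illustrative assumptions, not a real cracking tool):

```python
# Toy demonstration of why "surname + birth year" passwords fall instantly.
# The candidate patterns and year range below are invented for illustration.

def surname_year_candidates(surname: str, start: int = 1950, end: int = 2005):
    """Yield common surname+year variants an attacker would try first."""
    for year in range(start, end + 1):
        yield f"{surname}{year}"          # e.g. Badin1990
        yield f"{surname}{year % 100}"    # e.g. Badin90
        yield f"{surname.lower()}{year}"  # e.g. badin1990

leaked_password = "Badin1990"
candidates = list(surname_year_candidates("Badin"))

print(len(candidates))                 # 168 guesses total
print(leaked_password in candidates)   # True -- found almost immediately
```

A modern password cracker tries billions of guesses per second; 168 pattern-based candidates is effectively zero protection, and "Badin990" fares no better.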




t

Harrisburg University Researchers Claim Their 'Unbiased' Facial Recognition Software Can Identify Potential Criminals

Given all we know about facial recognition tech, it is literally jaw-dropping that anyone could make this claim… especially without being vetted independently.

A group of Harrisburg University professors and a PhD student have developed an automated computer facial recognition software capable of predicting whether someone is likely to be a criminal.

The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.

There's a whole lot of "what even the fuck" in CBS 21's reprint of a press release, but let's start with the claim about "no racial bias." That's a lot to swallow when the underlying research hasn't been released yet. Let's see what the National Institute of Standards and Technology has to say on the subject. This is the result of the NIST's examination of 189 facial recognition AI programs -- all far more established than whatever it is Harrisburg researchers have cooked up.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Why is this acceptable? The report inadvertently supplies the answer:

Middle-aged white men generally benefited from the highest accuracy rates.

Yep. And guess who's making laws or running police departments or marketing AI to cops or telling people on Twitter not to break the law or etc. etc. etc.

To craft a terrible pun, the researchers' claim of "no racial bias" is absurd on its face. Per se stupid af to use legal terminology.

Moving on from that, there's the 80% accuracy, which is apparently good enough since it will only threaten the life and liberty of 20% of the people it's inflicted on. I guess if it's the FBI's gold standard, it's good enough for everyone.
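It's actually worse than "20% of the people it's inflicted on," because of the base-rate fallacy: almost everyone scanned is not a criminal, so even a small false-positive rate buries the true positives. The arithmetic below uses illustrative assumptions (a city of one million, 1% "criminals," and a symmetric 80% accuracy, since the press release specifies none of this):

```python
# Illustrative base-rate arithmetic for an "80% accurate" classifier.
# Assumes accuracy applies symmetrically (80% true positive rate,
# 20% false positive rate) -- the press release specifies neither.

population = 1_000_000
criminal_rate = 0.01        # assumed prevalence: 1% of the population
accuracy = 0.80

criminals = round(population * criminal_rate)        # 10,000
innocents = population - criminals                   # 990,000

true_positives = round(criminals * accuracy)         # 8,000 flagged correctly
false_positives = round(innocents * (1 - accuracy))  # 198,000 flagged wrongly

# Of everyone the system flags, what fraction is actually "criminal"?
precision = true_positives / (true_positives + false_positives)
print(f"Total flagged: {true_positives + false_positives:,}")
print(f"Precision: {precision:.1%}")  # roughly 3.9%
```

Under those assumptions, about 96% of the people the software flags as "criminal" are innocent. That's the number that matters when a tool like this is "intended to help law enforcement prevent crime."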

Maybe this is just bad reporting. Maybe something got copy-pasted wrong from the spammed press release. Let's go to the source… one that somehow still doesn't include a link to any underlying research documents.

What does any of this mean? Are we ready to embrace a bit of pre-crime eugenics? Or is this just the most hamfisted phrasing Harrisburg researchers could come up with?

A group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal.

The most charitable interpretation of this statement is that the wrong-20%-of-the-time AI is going to be applied to the super-sketchy "predictive policing" field. Predictive policing -- a theory that says it's ok to treat people like criminals if they live and work in an area where criminals live -- is its own biased mess, relying on garbage data generated by biased policing to turn racist policing into an AI-blessed "work smarter not harder" LEO equivalent.

The question about "likely" is answered in the next paragraph, somewhat assuring readers the AI won't be applied to ultrasound images.

With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.

There's a big difference between "going to be" and "is," and researchers using actual science should know better than to use both phrases to describe their AI efforts. One means scanning someone's face to determine whether they might eventually engage in criminal acts. The other means matching faces to images of known criminals. They are far from interchangeable terms.

If you think the above quotes are, at best, disjointed, brace yourself for this jargon-fest which clarifies nothing and suggests the AI itself wrote the pullquote:

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

"Minute features in an image that are highly predictive of criminality." And what, pray tell, are those "minute features?" Skin tone? "I AM A CRIMINAL IN THE MAKING" forehead tattoos? Bullshit on top of bullshit? Come on. This is word salad, but a salad pretending to be a law enforcement tool with actual utility. Nothing about this suggests Harrisburg has come up with anything better than the shitty "tools" already being inflicted on us by law enforcement's early adopters.

I wish we could dig deeper into this but we'll all have to wait until this excitable group of clueless researchers decides to publish its findings. According to this site, the research is being sealed inside a "research book," which means it will take a lot of money to actually prove this isn't any better than anything that's been offered before. This could be the next Clearview, but we won't know if it is until the research is published. If we're lucky, it will be before Harrisburg patents this awful product and starts selling it to all and sundry. Don't hold your breath.




t

Senator Wyden And Others Introduce Bill Calling The DOJ's Bluff Regarding Its Attempt To Destroy Section 230 & Encryption

One of the key points we've been making concerning Attorney General William Barr and his DOJ's eager support for the terrible EARN IT Act is that much of it really seems designed to cover up the DOJ's own failings in fighting child porn and child exploitation. The premise behind the EARN IT Act is that there's a lot of child exploitation/child abuse material found on social media... and that social media companies should do more to block that content. Of course, if you step back and think about it, you'd quickly realize that this is a form of sweeping the problem under the rug. Rather than actually tracking down and arresting those exploiting and abusing children, it's demanding private companies just hide the evidence of those horrific acts.

And why might the DOJ and others be so supportive of sweeping evidence under the rug and hiding it? Perhaps because the DOJ and Congress have literally failed to live up to their mandates under existing laws to actually fight child exploitation. Barr's DOJ has been required under law to produce reports showing data about internet crimes against children, and come up with goals to fight those crimes. It has produced only two out of the six reports that were mandated over a decade ago. At the same time, Congress has only allocated a very small budget to state and local law enforcement for fighting internet child abuse. While the laws Congress passed say that Congress should give $60 million to local law enforcement, it has actually allocated only about half of that. Oh, and Homeland Security took nearly half of its "cybercrimes" budget and diverted it to immigration enforcement, rather than fighting internet crimes such as child exploitation.

So... maybe we should recognize that the problem isn't social media platforms, but the fact that Congress and law enforcement -- from local and state up to the DOJ -- have literally failed to do their job.

At least some elected officials have decided to call the DOJ's bluff on why we need the EARN IT Act. Led by Senator Ron Wyden (of course), Senators Kirsten Gillibrand, Bob Casey, Sherrod Brown and Rep. Anna Eshoo have introduced a new bill to actually fight child sex abuse online. Called the Invest in Child Safety Act, it would basically make law enforcement do its job regarding this stuff.

The Invest in Child Safety Act would direct $5 billion in mandatory funding to investigate and target the pedophiles and abusers who create and share child sexual abuse material online. And it would create a new White House office to coordinate efforts across federal agencies, after DOJ refused to comply with a 2008 law requiring coordination and reporting of those efforts. It also directs substantial new funding for community-based efforts to prevent children from becoming victims in the first place.

Basically, the bill would do a bunch of things to make sure that law enforcement is actually dealing with the very real problem of child exploitation, rather than demanding that internet companies (1) sweep evidence under the rug, and (2) break encryption:

  • Quadruple the number of prosecutors and agents in DOJ’s Child Exploitation and Obscenity Section from 30 FTEs to 120 FTEs;
  • Add 100 new agents and investigators for the Federal Bureau of Investigation’s Innocent Images National Initiative, Crimes Against Children Unit, Child Abduction Rapid Deployment Teams, and Child Exploitation and Human Trafficking Task Forces;
  • Fund 65 new NCMEC analysts, engineers, and mental health counselors, as well as a major upgrade to NCMEC’s technology platform to enable the organization to more effectively evaluate and process CSAM reports from tech companies;
  • Double funding for the state Internet Crimes Against Children (ICAC) Task Forces;
  • Double funding for the National Criminal Justice Training Center, to administer crucial Internet Crimes Against Children and Missing and Exploited Children training programs;
  • Increase funding for evidence-based programs, local governments and non-federal entities to detect, prevent and support victims of child sexual abuse, including school-based mental health services and prevention programs like the Children’s Advocacy Centers and the HHS’ Street Outreach Program;
  • Require tech companies to increase the time that they hold evidence of CSAM, in a secure database, to enable law enforcement agencies to prosecute older cases;
  • Establish an Office to Enforce and Protect Against Child Sexual Exploitation, within the Executive Office of the President, to direct and streamline the federal government’s efforts to prevent, investigate and prosecute the scourge of child exploitation;
  • Require the Office to develop an enforcement and protection strategy, in coordination with HHS and GAO; and
  • Require the Office to submit annual monitoring reports, subject to mandatory Congressional testimony to ensure timely execution.
While I always have concerns about law enforcement mission creep and misguided targeting of enforcement efforts, hopefully everyone can agree that child exploitation remains a very real problem, and that law enforcement should be investigating and going after those who are actually exploiting and abusing children. This bill would make that possible, rather than the alternative approach of just blaming the internet companies for law enforcement's failure to take any of this seriously.





Fans Port Mario 64 To PC And Make It Way Better, So Of Course Nintendo Is Trying To Nuke The Project

I'm lucky enough to own a decades-old Nintendo 64 and a handful of games, including the classic Mario 64. My kids love that game. Still, the first thing they asked when I showed it to them was why the screen was letterboxed, why the characters looked like they were made of Lego blocks, and why I needed weird cords to plug it all into the flat-screen television. The answer to these spoiled monsters' questions, of course, is that the game is super old and wasn't meant to be played on modern televisions. It's the story of a lot of older games, though many PC games at least have a healthy modding community that will take classics and get them working on present-day hardware. Consoles don't have that luxury.

Well, usually, that is. It turns out that enough folks were interested in modernizing Mario 64 that a group of fans managed to pull off porting it to PC. And, because this is a port and not emulation, they managed to update it to run in 4K and add a ton of modern visual effects.

Last year, Super Mario 64's N64 code was reverse-engineered by fans, allowing for all kinds of new and exciting things to be done with Nintendo’s 1996 classic. Like building a completely new PC port of the game, which can run in 4K and ultra-wide resolutions.

This is a very new and cool thing! Previously, if you were playing Super Mario 64 on PC, you were playing via emulation, as your PC ran code pretending to be an N64. This game is made specifically for the PC, built from the ground up, meaning it not only runs like a dream, but even supports mod stuff like ReShade, allowing for graphical tweaks (like the distance blur seen here).

The video the Kotaku post references can't be embedded here, because Nintendo already took it down. Instead, here's another video that hadn't been taken down at the time of this writing, so you can see just how great this looks.

In addition to videos of the project, Nintendo has also been busy firing off legal salvos to get download links for the PC port taken down from wherever it can find them. Now, while Nintendo's reputation for IP protectionism is such that it would almost certainly take this fan project down under virtually any circumstances, it is also worth noting that the company has a planned re-release of Mario 64 for its latest console. That likely only supercharged the speed with which it is trying to disappear this labor of love from fans of an antiquated game who have since moved on to gaming on their PCs.

But why should the company do this? Nintendo consoles are known for many things, including user-friendly gaming and colorful games geared generally towards younger audiences. You know, exactly not the people who would take it on themselves to get an old Mario game working on their PC instead of a Nintendo console. What threat does this PC port from fans represent to Nintendo revenue? It's hard to imagine that threat is anything substantial.

And, yet, here we are anyway. Nintendo, after all, doesn't seem to be able to help itself.





Secret Service Sends FOIA Requester A Redacted Version Of A Public DOJ Press Release

The government loves its secrets. It loves them so much it does stupid things to, say, "secure the nation..." or "protect the integrity of deliberative processes" or whatever the fuck. We should not trust the government's reasoning when it chooses to redact information from documents it releases to FOIA requesters. These assertions should always be challenged because the government's track record on redactions is objectively awful.

Here's the latest case in point: Emma Best -- someone the government considers a "vexatious" FOIA filer -- just received a completely stupid set of redactions from the Secret Service. Best requested documents mentioning darknet market Hansa, which was shut down (along with AlphaBay) following an investigation by US and Dutch law enforcement agencies.

The documents returned to Best contained redactions. This is unsurprising given the nature of the investigation. What's surprising is what the Secret Service decided to redact. As Best pointed out on Twitter, the Secret Service decided public press releases by the DOJ were too sensitive to be released to the general public.

Here's one of the redactions [PDF] the Secret Service applied to a press release that can be found unaltered and unedited at the Justice Department's publicly-accessible website:

And here's what the Secret Service excised, under the bullshit theory that a publicly-released press statement is somehow among the "inter-agency or intra-agency memorandums or letters which would not be available by law to a party other than an agency in litigation with the agency."

“This is likely one of the most important criminal investigations of the year – taking down the largest dark net marketplace in history,” said Attorney General Jeff Sessions. “Make no mistake, the forces of law and justice face a new challenge from the criminals and transnational criminal organizations who think they can commit their crimes with impunity using the dark net. The dark net is not a place to hide. The Department will continue to find, arrest, prosecute, convict, and incarcerate criminals, drug traffickers and their enablers wherever they are. We will use every tool we have to stop criminals from exploiting vulnerable people and sending so many Americans to an early grave. I believe that because of this operation, the American people are safer – safer from the threat of identity fraud and malware, and safer from deadly drugs.”

Um. Is Jeff Sessions being Yezhoved by the Secret Service? Does the agency consider him enough of a persona non grata after his firing by Trump to be excised from the Secret Service's official recollection of this dark web takedown? This insane conspiracy theory I just made up makes as much sense as anything the Secret Service could offer in explanation for this redaction. The redaction removed nothing but the sort of swaggering statement attorneys general always make after a huge bust.

Needless to say, Emma Best is challenging the Secret Service's redactions. Pithily.

I am appealing the integrity of the redactions, as you withheld public press releases under b5, which is grossly inappropriate.

Yeah. That's an understatement. The Secret Service has no business redacting publicly-available info. Even if this was a clerical error, it's so bad it's insulting. And that's why you can't trust the government on things like this: when it's not being malicious, it's being stupid.





New AT&T CEO Says You're A Moron If You Don't Use AT&T Streaming Services

Last week, AT&T CEO Randall Stephenson stepped down after his $150 billion bid to dominate the video advertising space fell flat on its face. Stephenson's tenure was plagued by no shortage of scandals, though it was his failures on the TV front that likely cost him his comfy seat as one of the highest-paid executives in America.

After spending $150 billion on several dubious megamergers (most notably the 2015 purchase of satellite TV provider DirecTV), Stephenson saddled the company with an ocean of debt. So much debt that it was forced to raise rates on customers in the middle of one of the biggest transformational shifts the TV sector has seen in decades (cord cutting and the rise of streaming video). And while Stephenson deserves credit for at least trying to get out ahead of the trend, his tenure was pockmarked by a long line of dubious decisions that directly contributed to the company losing more than 3.2 million pay TV subscribers last year alone.

But Stephenson's replacement, AT&T executive John Stankey, doesn't seem much better. In a profile piece last week, Bloomberg described fairly idiotic and cocky recent comments by Stankey as "blunt." Among them was the claim that "nobody knows as much about TV as me," and the insistence that those who don't subscribe to AT&T's confusing assortment of discount TV streaming services must certainly be stupid:

"When pitching AT&T’s new HBO Max streaming platform, he told the audience that anyone unwilling to pay $15 a month for the service had a low IQ. At a town hall with HBO employees last year, Stankey said the network had to dramatically increase its programming output, comparing the work ahead to childbirth. Once, when a Time Warner veteran criticized an idea during a meeting, Stankey replied, 'I know more about television than anybody.'"

Yeah, sounds like just the guy to right the ship and earn employee and customer respect. Especially at a company with no shortage of hubris, one that believed it could just bully, bullshit, and bribe its way to industry domination.

One of the major reasons Stephenson was ejected was the recently hyperactive hedge fund Elliott Management, which holds a massive stake in AT&T. Elliott complained that Stephenson had become megamerger happy and, despite eliminating 37,000 jobs to recoup merger debt (on top of billions in FCC regulatory favors and a $42 billion Trump tax cut), still wasn't doing enough firing. Reports now suggest that Elliott didn't much like Stankey either, but settled on him after external options proved even more underwhelming:

"Elliott, the hedge fund run by Paul Singer, remains skeptical of incoming CEO John Stankey’s decision-making but has decided his understanding of AT&T’s sprawling assets makes him a better candidate to take over for Stephenson than any external candidate, according to the people...Elliott was skeptical of Stankey’s decision-making as an architect of AT&T’s acquisitions of DirecTV and Time Warner. It advocated that AT&T focus on divesting assets and lowering debt, pushing the largest U.S. wireless company to sell DirecTV, one of the assets Stankey has steadfastly defended."

In short, nobody in this drama seems to know what they're actually doing. Few were happy with AT&T's previous leadership. And few seem happy with AT&T's new leader, who apparently thinks he's a TV-sector super genius and that you're a moron if you don't subscribe to AT&T's generally underwhelming TV offerings. Surely this will all go swimmingly.





Amazon Sued For Saying You've 'Bought' Movies That It Can Take Away From You

For well over a decade we've talked about the many problems that arise when copyright is compared to "property" -- and people try to simply port concepts over from physical, tangible property into the world of digital goods. A key aspect of this: when you "purchase" something digital online, is it really a "purchase" or is it a "license" (especially a license that could be revoked)? If it were a true "purchase," then you should own it and the seller shouldn't be able to take it back. But in practice, over and over and over again, we've seen stories of people having things they supposedly "bought" disappear. The situation is so crazy that we've referred to it as Schrödinger's Download, in that many copyright holders and retailers would like the very same thing to be a "sale" some of the time, and a "license" some of the time (the "times" for each tend to be whenever it hurts consumers the most). This has, at times, seeped into physical goods, where they've tried to add "license agreements" to physical products. Or, worse, when some copyright folks claimed that buying a DVD means you don't actually own what you bought, but rather are merely "purchasing access" to the content -- access that could be revoked.

Anyway, I'm amazed that we don't see more lawsuits about this kind of thing -- but one was recently filed in California. Someone named Amanda Caudel is suing Amazon for saying that you've "purchased" a video download, which Amazon might disappear from your library whenever it wants. As the lawsuit makes clear, Amazon directly says that you are buying the movie (as opposed to renting it). From the lawsuit filing itself:

And, they point out, in your account there's a listing of "Your Video Purchases & Rentals." But, the lawsuit claims, what you purchase doesn't seem to behave like a real purchase:

Reasonable consumers will expect that the use of a “Buy” button and the representation that their Video Content is a “Purchase” means that the consumer has paid for full access to the Video Content and, like any bought product, that access cannot be revoked.

Unfortunately for consumers who chose the “Buy” option, this is deceptive and untrue. Rather, the ugly truth is that Defendant secretly reserves the right to terminate the consumers’ access and use of the Video Content at any time, and has done so on numerous occasions, leaving the consumer without the ability to enjoy their already-bought Video Content.

Defendant’s representations are misleading because they give the impression that the Video Content is purchased – i.e. the person owns it - when in fact that is not true because Defendant or others may revoke access to the Video Content at any time and for any reason.

In so representing the “Purchase” of Video Content as true ownership of the content, Defendant took advantage of the (1) cognitive shortcuts made at the point-of-sale, e.g. Rent v. Buy and (2) price of the Video Content, which is akin to an outright purchase versus a rental.

Though some consumers may get lucky and never lose access to any of their paid-for media, others may one day find that their Video Content is now completely inaccessible. Regardless, all consumers have overpaid for the Video Content because they are not in fact owners of the Video Content, despite have paid extra money to “Buy” the product.

The plaintiff (or rather, her lawyers) is trying to make this a class action lawsuit, arguing that (among other things) this is false advertising. I am, not surprisingly, sympathetic to the plaintiff -- and remain disappointed at how copyright and similar restrictions are being used to chip away at ownership and actual property rights. That said... I'm not that optimistic the case will get very far. In the past, companies have been able to wiggle out of similar claims, and I'm pretty sure that Amazon tries to push disputes like this into binding arbitration, meaning that the lawsuit may be dead on arrival.

Still, it's yet another reminder of how copyright is chipping away at real property.