Directive comes as ransomware is exposing the fragility of critical supply chains
The Justice Department has created a task force to centrally track and coordinate all federal cases involving ransomware or related types of cybercrime, such as botnets, money laundering, and bulletproof hosting.
“To ensure we can make necessary connections across national and global cases and investigations… we must enhance and centralize our internal tracking of investigations and prosecutions of ransomware groups and the infrastructure and networks that allow the threats to persist,” Deputy Attorney General Lisa Monaco told US attorneys throughout the country on Thursday. She issued the directive in a memo that was first reported by Reuters. Investigators in field offices around the country will be expected to share information as well. The new directive applies not just to cases or investigations involving ransomware but to a host of related scourges, including botnets, money laundering, and bulletproof hosting.
5-21-21: Amazon and others are indefinitely suspending police use of face recognition products, but proposed legislation could expand the bans or make them permanent.
On May 17, Amazon announced it would extend its moratorium indefinitely, joining competitors IBM and Microsoft in self-regulated purgatory. The move is a nod to the political power of the groups fighting to curb the technology—and a recognition that new legislative battlegrounds are starting to emerge. Many believe that substantial federal legislation is likely to come soon.
“People are exhausted” – The past year has been pivotal for face recognition, with revelations of the technology’s role in false arrests, and bans on it put in place by almost two dozen cities and seven states across the US. But the momentum has been shifting for some time.
In 2018, AI researchers published a study comparing the accuracy of commercial facial recognition software from IBM, Microsoft, and Face++. Their work found that the technology identified lighter-skinned men much more accurately than darker-skinned women; IBM’s system scored the worst, with a 34.4% difference in error rate between the two groups. Also in 2018, the ACLU tested Amazon’s Rekognition and found that it misidentified 28 members of Congress as criminals—an error that disproportionately affected people of color. The ACLU wrote an open letter to Amazon demanding that the company ban government use of the technology, as did the Congressional Black Caucus—but Amazon made no changes. Full story at MIT Technology Review.