The White House, lawmakers from both parties, and federal agencies are all working on bills or projects to constrain potential downsides of the tech.
“We want to make sure we’re asking the accountability questions now because our job is going to get more difficult when we encounter AI systems that are more capable,” GAO chief data scientist Taka Ariga says. Despite the recent efforts of lawmakers and officials like Ariga, some policy experts say the US agencies and Congress still need to invest more in adapting to the age of AI.
In a recent report, Georgetown’s CSET outlined scary but plausible “AI accidents” to encourage lawmakers to work more urgently on AI safety research and standards. Its hypothetical disasters included a skin cancer app misdiagnosing Black people at higher rates, leading to unnecessary deaths, or mapping apps steering drivers into the path of wildfires. The Brookings Institution’s director of governance studies, Darrell West, recently called for the revival of the Office of Technology Assessment, shut down 25 years ago, to provide lawmakers with independent research on new technologies such as AI.
Members of Congress from both parties have attempted to bring back the OTA in recent years. They include Takano, who says it could help Congress be more proactive in tackling challenges raised by algorithms. “We need OTA or something like it to help members anticipate where technology is going to challenge democratic institutions, or the justice system, or political stability,” he says. . . . full article at Wired
ZipRecruiter, CareerBuilder, LinkedIn—most of the world’s biggest job search sites use AI to match people with job openings. But the algorithms don’t always play fair.
excerpt: For example, while men are more likely to apply for jobs that require work experience beyond their qualifications, women tend to only go for jobs in which their qualifications match the position’s requirements. The algorithm interprets this variation in behavior and adjusts its recommendations in a way that inadvertently disadvantages women.
“You might be recommending, for example, more senior jobs to one group of people than another, even if they’re qualified at the same level,” Jersin says. “Those people might not get exposed to the same opportunities. And that’s really the impact that we’re talking about here.”
Men also include more skills on their résumés at a lower degree of proficiency than women, and they often engage more aggressively with recruiters on the platform.
To address such issues, Jersin and his team at LinkedIn built a new AI designed to produce more representative results and deployed it in 2018. It was essentially a separate algorithm designed to counteract recommendations skewed toward a particular group. Before the recommendation system refers the matches curated by the original engine, the new AI ensures they include a representative distribution of users across gender.
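LinkedIn has not published the exact code behind this layer, but one common way to implement this kind of post-processing step is greedy proportional re-ranking: take the original engine's ranked list and reorder it so that, at every cutoff, each group's share of the results tracks a target proportion. The sketch below is illustrative only; the function name, inputs, and tie-breaking behavior are assumptions, not LinkedIn's actual implementation.

```python
# Hypothetical sketch of proportional re-ranking over an existing ranked list.
# All names and parameters are illustrative, not LinkedIn's real system.
from collections import defaultdict


def rerank(candidates, groups, targets):
    """Greedily reorder `candidates` (best first) so that every prefix of
    the result keeps each group's share close to its target proportion.

    candidates: list of ids, already ranked by the original engine
    groups:     dict mapping id -> group label
    targets:    dict mapping group -> desired proportion (sums to 1.0)
    """
    # Split the original ranking into per-group queues, preserving order.
    remaining = {g: [c for c in candidates if groups[c] == g] for g in targets}
    counts = defaultdict(int)
    result = []
    for k in range(1, len(candidates) + 1):
        # How many slots each group is still owed in the top k.
        owed = {g: targets[g] * k - counts[g] for g in targets if remaining[g]}
        # Fill the next slot from the group furthest below its target.
        g = max(owed, key=owed.get)
        result.append(remaining[g].pop(0))
        counts[g] += 1
    return result
```

With equal 50/50 targets, a list where one group's candidates all rank ahead of the other's comes back interleaved, so neither group is pushed out of the top of the results.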
Kan says Monster, which lists 5 to 6 million jobs at any given time, also incorporates behavioral data into its recommendations but doesn’t correct for bias in the same way that LinkedIn does. Instead, the marketing team focuses on getting users from diverse backgrounds signed up for the service, and the company then relies on employers to report back and tell Monster whether or not it passed on a representative set of candidates. . . . full story here
Directive comes as ransomware is exposing the fragility of critical supply chains
The Justice Department has created a task force to centrally track and coordinate all federal cases involving ransomware or related types of cybercrime, such as botnets, money laundering, and bulletproof hosting.
“To ensure we can make necessary connections across national and global cases and investigations… we must enhance and centralize our internal tracking of investigations and prosecutions of ransomware groups and the infrastructure and networks that allow the threats to persist,” Deputy Attorney General Lisa Monaco told US attorneys throughout the country on Thursday. She issued the directive in a memo that was first reported by Reuters. Investigators in field offices around the country would be expected to share information as well. The new directive applies not just to cases or investigations involving ransomware but to a host of related scourges, including botnets, money laundering, and bulletproof hosting.
5-21-21: Amazon and others are indefinitely suspending police use of face recognition products, but proposed legislation could make bans bigger or more permanent.
On May 17, Amazon announced it would extend its moratorium indefinitely, joining competitors IBM and Microsoft in self-regulated purgatory. The move is a nod to the political power of the groups fighting to curb the technology—and recognition that new legislative battlegrounds are starting to emerge. Many believe that substantial federal legislation is likely to come soon.
“People are exhausted” – The past year has been pivotal for face recognition, with revelations of the technology’s role in false arrests, and bans put in place by almost two dozen cities and seven states across the US. But the momentum has been shifting for some time.
In 2018, AI researchers published a study comparing the accuracy of commercial face recognition software from IBM, Microsoft, and Face++. Their work found that the technology identified lighter-skinned men much more accurately than darker-skinned women; IBM’s system scored the worst, with a 34.4% difference in error rate between the two groups. Also in 2018, the ACLU tested Amazon’s Rekognition and found that it misidentified 28 members of Congress as criminals—an error disproportionately affecting people of color. The organization wrote an open letter to Amazon demanding that the company ban government use of the technology, as did the Congressional Black Caucus—but Amazon made no changes. . . . full story here at MIT Technology Review