The White House, lawmakers from both parties, and federal agencies are all working on bills or projects to constrain potential downsides of the tech.
“We want to make sure we’re asking the accountability questions now because our job is going to get more difficult when we encounter AI systems that are more capable,” GAO chief data scientist Taka Ariga says. Despite the recent efforts of lawmakers and officials like Ariga, some policy experts say the US agencies and Congress still need to invest more in adapting to the age of AI.
In a recent report, Georgetown’s CSET outlined scary but plausible “AI accidents” to encourage lawmakers to work more urgently on AI safety research and standards. Its hypothetical disasters included a skin cancer app misdiagnosing Black people at higher rates, leading to unnecessary deaths, or mapping apps steering drivers into the path of wildfires. The Brookings Institution’s director of governance studies, Darrell West, recently called for the revival of the Office of Technology Assessment, shut down 25 years ago, to provide lawmakers with independent research on new technologies such as AI.
Members of Congress from both parties have attempted to bring back the OTA in recent years. They include Takano, who says it could help Congress be more proactive in tackling challenges raised by algorithms. “We need OTA or something like it to help members anticipate where technology is going to challenge democratic institutions, or the justice system, or political stability,” he says. . . . full article at Wired
In 2011, Chinese spies stole the crown jewels of cybersecurity—stripping protections from firms and government agencies worldwide. Here’s how it happened.
“. . . . THE RSA BREACH, when it became public days later, would redefine the cybersecurity landscape. The company’s nightmare was a wake-up call not only for the information security industry—the worst-ever hack of a cybersecurity firm to date—but also a warning to the rest of the world . . . .” full story here at Wired
Many 2021 predictions focus on how specific technologies will impact the way we work and how we perform product development in a COVID and post-COVID world. These trends were foreshadowed by the tech achievements recognized in 2020. The intersection between 2021 predictions and 2020 awards provides interesting insights into such life-changing areas as working from home (WFH), cybersecurity (e.g., Zoom-bombing), product development, smart tech in homes and businesses, energy development, and more.
2021 may be remembered for its accelerated transition to a digital workplace, which began in response to the coronavirus pandemic. Digital technology has shown its full potential to both simplify and amplify communication in science, business, and government via video calls, webinars, and virtual events. Overall, we probably gained three to five years in terms of the adoption of and migration to this new normal in 2020.
Under US immigration law, employers must give preference to US workers
Timothy B. Lee, 12/3/2020, for Ars Technica
The United States Department of Justice sued Facebook on Thursday, arguing that the social media giant discriminated against US workers by giving preference to Facebook employees on H-1B visas who wanted to transition to permanent jobs at the company.
The H-1B visa program lets foreign workers work at a US company for three years. It can be renewed once. After that, an employer can ask for permission to offer the immigrant a permanent job under the Department of Labor’s PERM certification program. But the employer is supposed to first advertise the job to see if any Americans are available. Only if no qualified Americans apply can the job go to the immigrant.
In its lawsuit, the Justice Department argues that Facebook’s hiring practices made a mockery of these requirements. Most . . . . full story here
. . . . The final huge thing to point out here is Tesla’s approach to full self-driving. You might wonder what’s taking Tesla so long when there are completely autonomous vehicles on the road today from companies like Waymo, which require no human in the driver’s seat.
The reason Waymo can do this is that they use highly detailed pre-built maps that “highlight information such as curbs and sidewalks, lane markers, crosswalks, traffic lights, stop signs, and other road features.” This means they can only drive in areas that have been mapped, but it gives them a detailed understanding of what the world looks like at the car's current GPS coordinates. They use cameras and lidar sensors to detect other cars, road signs, and traffic light colors so the car can drive safely on public roads. . . . full article
A Zócalo Public Square Event – YouTube Video Stream
The world is projected to generate 90 zettabytes of data this year and the next. That’s more than all the data produced since the arrival of computers, and if we still used DVDs, we’d need 19 trillion of them to store it all. Swimming in this massive sea of information, humans are easily overwhelmed; studies suggest we avoid important information because it might make us miserable, while seeking out information of dubious value to make ourselves happy.
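The 19 trillion figure checks out with back-of-the-envelope arithmetic, assuming the standard 4.7 GB capacity of a single-layer DVD (the DVD capacity is our assumption; the article does not specify which disc format it used):

```python
# Sanity check of the "19 trillion DVDs" figure quoted above.
# Assumes a single-layer DVD holds 4.7 GB (decimal gigabytes).
total_bytes = 90 * 10**21        # 90 zettabytes of projected data
dvd_bytes = 4.7 * 10**9          # capacity of one single-layer DVD
dvds_needed = total_bytes / dvd_bytes

print(f"{dvds_needed:.2e} DVDs")  # ~1.91e13, i.e. roughly 19 trillion
```

At about 1.2 mm per disc, that stack of DVDs would reach far past the Moon, which is why the comparison lands.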
What information do we need to know? What role should policymakers play in helping us find data that improves our well-being and filter out information—from calorie counts to credit card fees—that wastes our time or even endangers us? Harvard University legal scholar Cass Sunstein, author of “Too Much Information: Understanding What You Don’t Want To Know,” visited Zócalo and the Commonwealth Club to explain how we can make information work for us. This online streamed event was moderated by “WIRED” senior editor Lauren Goode. Read more about our panelists here: https://zps.la/3cjL6OA
Forget the idea that China doesn’t care about privacy—its citizens will soon have much greater consumer privacy protections than Americans.
The narrative in the US that the Chinese don’t care about data privacy is simply misguided. It’s true that the Chinese government has built a sophisticated surveillance apparatus (with the help of Western companies), and continues to spy on its citizenry.
But when it comes to what companies can do with people’s information, China is rapidly moving toward a data privacy regime that, in aligning with the European Union’s GDPR, is far more stringent than any federal law on the books in the US. full story / podcast here
Projects take longer. Collaboration is harder. And training new workers is a struggle. ‘This is not going to be sustainable.’
Four months ago, employees at many U.S. companies went home and did something incredible: They got their work done, seemingly without missing a beat. Executives were amazed at how well their workers performed remotely, even while juggling child care and the distractions of home. Twitter Inc. and Facebook Inc., among others, quickly said they would embrace remote work . . . . Read full article here at WSJ
IBM’s CEO says we should reevaluate selling the technology to law enforcement
IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge. Krishna addressed the letter to Sens. Cory Booker (D-NJ) and Kamala Harris (D-CA) and Reps. Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY).
“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
Facial recognition software has improved greatly over the last decade thanks to advances in artificial intelligence. At the same time, the technology — because it is often provided by private companies with little regulation or federal oversight — has been shown to suffer from bias along lines of age, race, and ethnicity, which can make the tools unreliable for law enforcement and security and ripe for potential civil rights abuses. full article at The Verge