. . . . The final huge thing to point out here is Tesla’s approach to full self-driving. You might wonder what’s taking Tesla so long when there are completely autonomous vehicles on the road today from companies like Waymo, which require no human in the driver’s seat.
The reason Waymo can do this is that they use highly detailed pre-built maps that “highlight information such as curbs and sidewalks, lane markers, crosswalks, traffic lights, stop signs, and other road features.” This means they can only drive in areas that have been mapped, but it gives them a detailed understanding of what the world looks like at the car’s current GPS coordinates. They use cameras and lidar sensors to detect other cars, road signs and traffic light colours so the car can drive safely on public roads. . . . full article
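The division of labor described above — static road features coming from a pre-built map, dynamic objects coming from live cameras and lidar — can be illustrated with a toy sketch. Everything here (the map cells, feature names, and `world_model` function) is hypothetical for illustration, not Waymo's actual stack:

```python
# Toy illustration: a pre-built HD map supplies static features (curbs,
# lane markers, stop signs) keyed by a coarse GPS cell, while live sensors
# supply dynamic objects (cars, pedestrians, traffic-light states).

HD_MAP = {
    (37.422, -122.084): ["curb", "lane_marker", "crosswalk", "stop_sign"],
}

def world_model(gps_cell, sensor_detections):
    """Merge static map features with live detections; refuse to drive off-map."""
    static = HD_MAP.get(gps_cell)
    if static is None:
        # Mirrors the constraint in the excerpt: the car can only
        # operate in areas that have already been mapped.
        raise ValueError("unmapped area: cannot operate here")
    return {"static": static, "dynamic": sensor_detections}

model = world_model((37.422, -122.084), ["car", "pedestrian", "traffic_light:green"])
```

The point of the design is that the hard, slow-changing perception work (where exactly the curbs and crosswalks are) is done once offline, so the car's onboard sensors only need to handle what moves.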
A Zócalo Public Square Event – YouTube Video Stream
The world is projected to generate 90 zettabytes of data this year and the next. That’s more than all the data produced since the arrival of computers, and if we still used DVDs, we’d need 19 trillion to store it all. Swimming in this massive sea of information, humans are easily overwhelmed; studies suggest we avoid important information because it might make us miserable, while seeking out information of dubious value to make ourselves happy.
What information do we need to know? What role should policymakers play in helping us find data that improves our well-being and filter out information—from calorie counts to credit card fees—that wastes our time or even endangers us? Harvard University legal scholar Cass Sunstein, author of “Too Much Information: Understanding What You Don’t Want To Know,” visited Zócalo and the Commonwealth Club to explain how we can make information work for us. This online streamed event was moderated by “WIRED” senior editor Lauren Goode. Read more about our panelists here: https://zps.la/3cjL6OA
Forget the idea that China doesn’t care about privacy—its citizens will soon have much greater consumer privacy protections than Americans.
The narrative in the US that the Chinese don’t care about data privacy is simply misguided. It’s true that the Chinese government has built a sophisticated surveillance apparatus (with the help of Western companies), and continues to spy on its citizenry.
But when it comes to what companies can do with people’s information, China is rapidly moving toward a data privacy regime that, in aligning with the European Union’s GDPR, is far more stringent than any federal law on the books in the US. full story / podcast here
Projects take longer. Collaboration is harder. And training new workers is a struggle. ‘This is not going to be sustainable.’
Four months ago, employees at many U.S. companies went home and did something incredible: They got their work done, seemingly without missing a beat. Executives were amazed at how well their workers performed remotely, even while juggling child care and the distractions of home. Twitter Inc. and Facebook Inc., among others, quickly said they would embrace remote work . . . . Read full article here at WSJ
IBM’s CEO says we should reevaluate selling the technology to law enforcement
IBM will no longer offer general-purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge. Krishna addressed the letter to Sens. Cory Booker (D-NJ) and Kamala Harris (D-CA) and Reps. Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY).
“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
Facial recognition software has improved greatly over the last decade thanks to advances in artificial intelligence. At the same time, the technology — because it is often provided by private companies with little regulation or federal oversight — has been shown to suffer from bias along lines of age, race, and ethnicity, which can make the tools unreliable for law enforcement and security and ripe for potential civil rights abuses. full article at The Verge
The state of California and three of its biggest cities have sued Uber and Lyft for misclassifying hundreds of thousands of drivers as independent contractors, in violation of a new state law. The suit, under a law known as Assembly Bill 5, argues that drivers are company employees, entitled to minimum and overtime wages, paid sick leave, health benefits, and access to social insurance programs like unemployment.
The suit threatens to upend the business models of Uber and Lyft, which view themselves as tech-y intermediaries between people who want rides and people willing to drive them. An analysis by Barclays estimates that treating California drivers as employees would cost Uber $506 million and Lyft $290 million annually; neither company is profitable.
The lawsuit also brings to a head simmering tensions over gig economy workers, who have been at the front lines of the coronavirus pandemic. Workers at firms that offer shopping or delivery, such as Instacart and Postmates, have complained that their low wages, determined and managed by platform algorithms, don’t accurately reflect the risks they’re taking to deliver people and goods during a public health crisis.
Meanwhile, both companies are reporting huge drops in the use of their ride-share services during the pandemic, adding to the precariousness of a business model that is now under legal challenge as well. Stay tuned . . .
As the epicenter of digital entertainment creation and entertainment jobs, our region makes keeping up with tech and entertainment part of our mandate. With the dawn of AI and the rise of social media, technology is scarier — and more exciting — than ever. Here’s how it’s changing music, TV, sports and more.
Virtual reality is going hyper-real. CGI is bringing back the old stars. A recent concert used no fewer than 157,000 multidirectional speakers to send the music to its audience! One car company is doing away with speakers and simply turning the entire car body into a speaker. Machine-made music, anyone? AI songwriting is gaining traction; several artists using a songwriting algorithm called Flow Machines have already appeared on Spotify.
But in addition to innovations and job potential, there are policy questions in need of addressing when it comes to entertainment technology, just as there are in other tech fields. For instance, according to Rolling Stone magazine, “Taylor Swift fans mesmerized by rehearsal clips on a kiosk at her May 18, 2019 Rose Bowl show were unaware of one crucial detail: A facial-recognition camera inside the display was taking their photos. The images were being transferred to a Nashville ‘command post,’ where they were cross-referenced with a database of hundreds of the pop star’s known stalkers, according to Mike Downing, chief security officer of Oak View Group, an advisory board for concert venues including Madison Square Garden and the Forum in L.A. ‘Everybody who went by would stop and stare at it, and the software would start working.’ . . . Despite the obvious privacy concerns — for starters, who owns those pictures of concertgoers and how long can they be kept on file? — the use of facial-recognition technology is on the rise at stadiums and arenas, and security is not the only goal. . . .” full article
There are cameras everywhere now, even in billboards you pass. Every time you ID yourself or a friend on Facebook, you add facial identity information to the cloud. Most airports are using it on all patrons. Security cameras employ it. Facial recognition may make police work faster and easier. But knowing that everyone’s face is being tracked 24/7, which is possible in the near future, could have a detrimental effect on people’s decisions about what they will say and where they will go (think churches, community centers, rallies . . . . ). So, not only does facial recognition have an impact on our rights of privacy, it could impact our rights of assembly, speech and more. In addition, it is notoriously inaccurate on faces of color and women. This is something that society needs to be addressing together, while there is still time, rather than allowing a free-for-all in use by police, military, commercial security companies, advertising agencies and more. A few cities (including San Francisco and Oakland) have banned its use, but there are currently no state or federal laws regulating facial recognition. Watch the video, then consider if policy intervention is needed.