Axed plan’s 10Mbps standard could have banned public networks in 98% of Ohio
After coming close to imposing a near-total ban on municipal broadband networks, Ohio’s Republican-controlled legislature has reportedly dropped the proposed law in final negotiations over the state budget. The final budget agreement “axed a proposal to limit local governments from offering broadband services,” The Columbus Dispatch wrote. With a June 30 deadline looming, Ohio’s House and Senate approved the budget and sent it to Gov. Mike DeWine for final approval on Monday night.
As we wrote earlier this month, the Ohio Senate approved a version of the budget containing an amendment that would have forced existing municipal broadband services to shut down and prevented the formation of new public networks. The proposed law was reportedly “inserted without prior public discussion,” and no state senator publicly sponsored the amendment. It was approved in a party-line vote, with Democrats opposing the restrictions on municipal broadband. The House version did not contain the amendment, and it was dropped during negotiations between the House and Senate.
“Real grassroots movement”
Lawmakers apparently relented to public pressure from supporters of municipal broadband and cities and towns that operate the networks. People and businesses from Fairlawn, where the city-run FairlawnGig network offers fiber Internet, played a significant role in the protests. FairlawnGig itself asked users to put pressure on lawmakers, and the subscribers did so in great numbers. . . . . full story here at Ars Technica
ZipRecruiter, CareerBuilder, LinkedIn—most of the world’s biggest job search sites use AI to match people with job openings. But the algorithms don’t always play fair.
excerpt: For example, while men are more likely to apply for jobs that require work experience beyond their qualifications, women tend to only go for jobs in which their qualifications match the position’s requirements. The algorithm interprets this variation in behavior and adjusts its recommendations in a way that inadvertently disadvantages women.
“You might be recommending, for example, more senior jobs to one group of people than another, even if they’re qualified at the same level,” Jersin says. “Those people might not get exposed to the same opportunities. And that’s really the impact that we’re talking about here.”
Men also include more skills on their résumés at a lower degree of proficiency than women, and they often engage more aggressively with recruiters on the platform.
To address such issues, Jersin and his team at LinkedIn built a new AI designed to produce more representative results and deployed it in 2018. It was essentially a separate algorithm designed to counteract recommendations skewed toward a particular group. The new AI ensures that before referring the matches curated by the original engine, the recommendation system includes a representative distribution of users across gender.
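LinkedIn has not published the details of this second algorithm, but the general technique it describes is post-processing re-ranking: take the relevance-ranked candidates from the original engine and reorder them so every prefix of the delivered list roughly matches a target distribution across groups. A minimal sketch of that idea (all function and variable names here are hypothetical, not LinkedIn's):

```python
from collections import defaultdict, deque

def rerank_representative(ranked, group_of, target_share):
    """Re-rank a relevance-ordered candidate list so that every prefix of
    the output approximately matches a target share per group.

    ranked: candidate ids, best first (from the original relevance engine)
    group_of: dict mapping candidate id -> group label
    target_share: dict mapping group label -> desired fraction (sums to 1)
    """
    # Split candidates into per-group queues, preserving relevance order.
    queues = defaultdict(deque)
    for c in ranked:
        queues[group_of[c]].append(c)

    out, counts = [], defaultdict(int)
    while any(queues.values()):
        # Pick the non-empty group currently furthest below its target share.
        best = None
        for g, q in queues.items():
            if not q:
                continue
            deficit = target_share[g] * (len(out) + 1) - counts[g]
            if best is None or deficit > best[0]:
                best = (deficit, g)
        g = best[1]
        out.append(queues[g].popleft())
        counts[g] += 1
    return out
```

Within each group, relevance order is untouched; the re-ranker only interleaves the groups, which is why this style of fix can improve representativeness without discarding the original engine's scoring.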
Kan says Monster, which lists 5 to 6 million jobs at any given time, also incorporates behavioral data into its recommendations but doesn’t correct for bias in the same way that LinkedIn does. Instead, the marketing team focuses on getting users from diverse backgrounds signed up for the service, and the company then relies on employers to report back and tell Monster whether or not it passed on a representative set of candidates. . . . full story here
With all the ongoing ransomware and cyber-attacks, connected IoT devices need an extra layer of security. New legislation in both Europe and the US is mandating such strengthened security. But what tools are available for embedded IoT engineers to meet these new requirements?
To learn more about providing enhanced protection of connected devices, Design News reached out to Haydn Povey, CEO of Secure Thingz and General Manager of the Embedded Security Solutions division at IAR Systems. What follows is a portion of that discussion.
“The requirements of new legislation for security in IoT devices are impacting us now. With the advent of EN 303 645 and the US IoT Cybersecurity Act signed into law last year, there is now mounting pressure on the Consumer IoT market to meet security standards. However, this is not just limited to Consumer IoT, with regulations evolving quickly in other markets, such as the IEC 62443 requirement for Industrial IoT (Industry 4.0) and similar requirements in medical and automotive.” . . . full story here
Directive comes as ransomware is exposing the fragility of critical supply chains
The Justice Department has created a task force to centrally track and coordinate all federal cases involving ransomware or related types of cybercrime, such as botnets, money laundering, and bulletproof hosting.
“To ensure we can make necessary connections across national and global cases and investigations… we must enhance and centralize our internal tracking of investigations and prosecutions of ransomware groups and the infrastructure and networks that allow the threats to persist,” Deputy Attorney General Lisa Monaco told US attorneys throughout the country on Thursday. She issued the directive in a memo that was first reported by Reuters. Investigators in field offices around the country would be expected to share information as well. The new directive applies not just to cases or investigations involving ransomware but also to a host of related scourges.
The rise of connected medical devices demands a more proactive approach to cybersecurity.
Connected medical devices have become essential for modern healthcare. Their prevalence has improved healthcare immensely but also brought an increased threat of cyber attacks. Last year saw a 55% increase in cybersecurity attacks on healthcare providers in the United States alone. With patient data, health records, and critical infrastructure at risk—and connected devices only set to become more widespread and complex—the industry needs to reconsider its approach to cybersecurity protection. . . . .
As healthcare organizations rush to adopt connected solutions, however, many are having to reflect on the cybersecurity implications of connectivity. With HCOs encountering a near 50% increase in cyberattacks by the end of 2020, the need to better address vulnerabilities in digital health systems is more pressing than ever. Cyberattacks aren’t just becoming more frequent; they’re also becoming more sophisticated. Recent years have seen a range of new threats come to the fore: 18 zero-day vulnerabilities—codenamed Ripple20—were identified recently by cybersecurity firm JSOF, while a range of vulnerabilities in IPnet software, named URGENT/11, poses a particular threat to the healthcare industry, according to the FDA. . . . . full story here
5-21-21: Amazon and others are indefinitely suspending police use of face recognition products, but proposed legislation could make bans bigger or more permanent.
On May 17, Amazon announced it would extend its moratorium indefinitely, joining competitors IBM and Microsoft in self-regulated purgatory. The move is a nod to the political power of the groups fighting to curb the technology—and a recognition that new legislative battlegrounds are starting to emerge. Many believe that substantial federal legislation is likely to come soon.
“People are exhausted” – The past year has been pivotal for face recognition, with revelations of the technology’s role in false arrests, and bans on it put in place by almost two dozen cities and seven states across the US. But the momentum has been shifting for some time.
In 2018, AI researchers published a study comparing the accuracy of commercial facial recognition software from IBM, Microsoft, and Face++. Their work found that the technology identified lighter-skinned men much more accurately than darker-skinned women; IBM’s system scored the worst, with a 34.4% difference in error rate between the two groups. Also in 2018, the ACLU tested Amazon’s Rekognition and found that it misidentified 28 members of Congress as criminals—an error disproportionately affecting people of color. The organization wrote its own open letter to Amazon, demanding that the company ban government use of the technology, as did the Congressional Black Caucus—but Amazon made no changes. . . . full story here at MIT Technology Review
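The 34.4% figure above is a gap between per-group error rates, which is how such audits typically quantify bias: classify a labeled test set, compute the error rate for each demographic group, and report the spread. A small sketch of that calculation (the counts below are hypothetical, not the study's actual data):

```python
def disparity(results):
    """Compute per-group error rates and the max gap between them.

    results: dict mapping group label -> (misclassified, total)
    Returns (rates dict, gap in percentage points, rounded to 1 decimal).
    """
    rates = {g: errors / total for g, (errors, total) in results.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, round(gap * 100, 1)

# Hypothetical audit counts, for illustration only:
rates, gap = disparity({
    "lighter_skinned_men": (2, 100),    # 2% error rate
    "darker_skinned_women": (36, 100),  # 36% error rate
})
```

With these illustrative counts the gap comes out to 34.0 percentage points; the audit's insight is that a single headline accuracy number can hide exactly this kind of spread.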
Engineers at Duke University have developed the world’s first fully recyclable printed electronics. Their recycling process recovers nearly 100% of the materials used—and preserves most of their performance capabilities for reuse. By demonstrating a crucial and relatively complex computer component—the transistor—created with three carbon-based inks, the researchers hope to inspire a new generation of recyclable electronics.
“Silicon-based computer components are probably never going away, and we don’t expect easily recyclable electronics like ours to replace the technology and devices that are already widely used,” said Aaron Franklin, the Addy Professor of Electrical and Computer Engineering at Duke. “But we hope that by creating new, fully recyclable, easily printed electronics and showing what they can do, that they might become widely used in future applications.”
The pile of discarded electronics keeps growing, yet less than a quarter of it is recycled each year, according to a United Nations estimate. Part of the problem is that electronic devices are difficult to recycle. Large plants employ hundreds of workers who hack at bulky devices. But while scraps of copper, aluminum and steel can be recycled, the silicon chips at the heart of the devices cannot. . . . . full story
The largest Internet providers in the US funded a campaign that generated “8.5 million fake comments” to the Federal Communications Commission as part of the ISPs’ fight against net neutrality rules during the Trump administration, according to a report issued today [May 6, 2021] by New York State Attorney General Letitia James.
Nearly 18 million out of 22 million comments were fabricated, including both pro- and anti-net neutrality submissions, the report said. One 19-year-old submitted 7.7 million pro-net neutrality comments under fake, randomly generated names. But the astroturfing effort funded by the broadband industry stood out because it used real people’s names without their consent, with third-party firms hired by the industry faking consent records, the report said.
The NY AG’s office began its investigation in 2017 and said it faced stonewalling from then-FCC Chairman Ajit Pai, who refused requests for evidence. But after a years-long process of obtaining and analyzing “tens of thousands of internal emails, planning documents, bank records, invoices, and data comprising hundreds of millions of records,” the NY AG said it “found that millions of fake comments were submitted through a secret campaign, funded by the country’s largest broadband companies, to manufacture support for the repeal of existing net neutrality rules using lead generators.”
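The report does not describe the AG office's tooling, but one simple signal an analyst sifting hundreds of millions of records can use is verbatim duplication: the same comment text submitted under many distinct names suggests a bulk-submission campaign rather than organic input. A minimal, hypothetical sketch of that one heuristic:

```python
def flag_duplicate_campaigns(comments, threshold=1000):
    """Flag comment texts submitted verbatim by many distinct names.

    comments: iterable of (submitter_name, comment_text) pairs
    threshold: minimum number of distinct names to flag a text
    Returns: dict mapping flagged text -> count of distinct submitters.
    """
    names_per_text = {}
    for name, text in comments:
        names_per_text.setdefault(text, set()).add(name)
    return {t: len(ns) for t, ns in names_per_text.items() if len(ns) >= threshold}
```

Real analyses combine many such signals (timing bursts, name-generation patterns, consent-record mismatches); exact duplication alone is only the crudest of them.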
It was clear before Pai completed the repeal in December 2017 that millions of people—including dead people—were impersonated in net neutrality comments. Even industry-funded research found that 98.5 percent of genuine comments opposed Pai’s deregulatory plan. But today’s report reveals more details about how many comments were fake and how the broadband industry was involved. . . . full story at Ars Technica here
“Algorithms have great potential for good. They can also be misused.”
The other panel expert was Tristan Harris, president of the Center for Humane Technology and a former designer at Google. For years, Harris has been vocal about the perils of algorithmically driven media, and his opening remarks didn’t stray from that view. “We are now sitting through the results of 10 years of this psychologically deranging process that have warped our national communications and fragmented the Overton window and the shared reality we need as a nation to coordinate to deal with our real problems.”
One of Harris’ proposed solutions is to subject social media companies to the same regulations that university researchers face when they do psychologically manipulative experiments. “If you compare side-by-side the restrictions in an IRB study in a psychology lab at a university when you experiment on 14 people—you’ve got to file an IRB review. Facebook, Twitter, YouTube, TikTok are on a regular, daily basis tinkering with the brain implant of 3 billion people’s daily thoughts with no oversight.” full article here . . . .
With global and local privacy laws such as the GDPR and CCPA setting standards for collecting personal data (email, phone number, cookie, or mobile ID), it is becoming increasingly important for businesses that plan to advertise to do so in ways that are more efficient.
Glendale businesses that engage in any type of digital marketing will need to understand what is happening so they can prepare themselves as they engage in any type of digital advertising in the future, including working with the big platforms like Facebook and Google. This will be an online event, April 28, 2021 from 2-3pm. Click here for more information about the event