
Apple unveils first Macs built to run more like iPhones

SAN RAMON, Calif. (AP) — Apple is rolling out new Mac computers powered by the same kind of chips that run iPhones and iPads, a move aimed at making it easier for its most popular products to work together.

For instance, Macs using the new chips will be able to run the same apps designed for the iPhone’s mobile operating system, although it appears some developers aren’t immediately keen on making those apps available for Macs. Apple didn’t demonstrate any other interoperability features based on the new chips, although analysts expect more cross-pollination.

The new Mac lineup unveiled Tuesday will be in stores five months after Apple announced it would abandon its longtime partner Intel in favor of using its own processors for Mac computers. Apple said its new Mac chips make possible faster processing speeds, sleeker designs and longer running times on a single battery charge.

For instance, some Macs have eliminated a cooling fan inside the machines, helping slim down their design.

The transition to the new in-house chips could also create stumbling blocks for Apple and other software makers aiming to adapt existing Mac software so it will also run smoothly on the new models.

Initially, Apple will only be putting its chips in smaller computers — the 13-inch MacBook Air and 13-inch MacBook Pro, as well as the Mac Mini desktop. The company expects it will take another two years before all its Macs are running on the in-house chips.

All three new computers are supposed to be available in stores next week, with prices ranging from about $700 for the Mac Mini to $1,300 for the 13-inch MacBook Pro.

The new Macs are debuting amid high demand for laptop computers as consumers, companies, schools and government agencies adjust to a work-at-home shift triggered by the COVID-19 pandemic. Even if a vaccine eases the threat posed by the novel coronavirus, people are still expected to work from home more frequently than they previously did.

Apple’s Mac sales surged 17% during the first nine months of this year compared to 2019. The company’s iPhone revenue fell 9% over the same span as people continued to hold on to their older models for longer periods or bought devices powered by Google’s Android software instead.

Apple still gets four times more revenue from iPhones than it does from Macs. Sales of Macs also lag far behind those of PCs made by Lenovo, HP and Dell that run Microsoft’s Windows software and primarily use chips made by Intel and AMD.

False claims of voting fraud, pushed by Trump, thrive online

It started months before Election Day with false claims on Facebook and Twitter that mail-in ballots cast for President Donald Trump had been chucked in dumpsters or rivers.

Now, a week after the final polls closed, falsehoods about dead people voting and ballots being thrown out by poll workers are still thriving on social media, reaching an audience of millions. Trump and his supporters are pointing to those debunked claims on social media as reason to not accept that Democrat Joe Biden won the election.

“These will probably persist for years or even decades unfortunately,” Kate Starbird, a University of Washington professor and online misinformation expert, said of the false claims about the U.S. election process. “People are very motivated to both participate in them and believe them.”

There is no evidence of widespread fraud in the 2020 election. In fact, voting officials from both political parties have stated publicly that the election went well and international observers confirmed there were no serious irregularities.

The issues raised by Trump’s campaign and his allies are typical in every election: problems with signatures, secrecy envelopes and postal marks on mail-in ballots, as well as the potential for a small number of ballots miscast or lost. With Biden leading Trump by substantial margins in key battleground states, none of those issues would have any impact on the outcome of the election. Many of the legal challenges brought by Trump’s campaign have been tossed out by judges, some within hours of their filing.

But Trump, who primed his supporters for months to doubt this election’s outcome with false tales of ballots being “dumped in rivers” and baseless tweets warning of a “rigged election,” has continued his assault on the U.S. vote in more than 40 Facebook and Twitter posts since Election Day.

“This was a stolen election,” Trump tweeted on Sunday, the day after Biden became president-elect.

Trump’s supporters have readily echoed the president’s cries of an unfair election on Facebook and Twitter.

Tweets and retweets with terms such as “steal,” “fraud,” “rigged” and “dead” referring to the election spiked more than 2,800% from Nov. 2 to Nov. 6, according to an analysis by VineSight, a tech company that tracks misinformation. The company found more than 1.6 million retweets containing some of those words on Nov. 6 alone.

The false claims have shapeshifted over the last week, ranging from misleading assertions that ballots filled out with Sharpie pens in Illinois, Arizona and Michigan were thrown out to an inaccurate social media post from Eric Trump that the number of ballots cast in Wisconsin exceeded the number of registered voters.

In recent days, prominent Republicans and Trump allies have peddled social media claims that hundreds or thousands of dead people voted in key battleground states like Pennsylvania or Michigan.

One tweet, shared more than 50,000 times, falsely claimed that a dead woman named Donna Brydges voted in the election. Brydges is very much alive, she confirmed to the AP by phone last week. In fact, she had “just beat me in a game of Cribbage,” her husband told a reporter.

Between Election Day and Monday, roughly 5 million mentions of voter fraud and “Stop the Steal” were made across social media and online news sites, with most of the claims focusing on closely contested states like Pennsylvania, Georgia and Michigan, according to an analysis by media intelligence firm Zignal Labs. Mentions of voter fraud have not waned since final votes were cast Tuesday.

Last week, as Biden pulled ahead in the race, Trump supporters quickly launched dozens of “Stop the Steal” groups on Facebook and began using the platform to organize “Stop the Steal” rallies.

The social media platforms have tried to rein in the false claims.

Facebook quickly shut down one “Stop the Steal” group, which ballooned to more than 350,000 members in a single day, after some members called for violence, and it has taken down additional “Stop the Steal” groups. Over the last week, Twitter has hidden nearly a dozen of the president’s tweets behind warning labels for making false or unproven claims that voter fraud occurred.

That’s pushed a small but vocal faction of conservatives to lesser-known social media sites like Parler, which does not moderate content posted by users as closely as mainstream tech companies like Facebook, YouTube or Twitter.

Parler has fewer than 8 million users, but its reach is quickly growing. As of Tuesday, Parler was the most downloaded app in Apple’s store, followed by MeWe. Newsmax, a conservative cable network, was in fourth place. According to Sensor Tower, which tracks such data, Parler was downloaded over 2 million times on Apple and Android in the U.S. from Nov. 3 to Nov. 9. This is more than 31 times the downloads it saw in the prior week. In that same period, Newsmax saw 583,000 installs, up more than 11 times from the previous week. MeWe hit 218,000 installs, more than 14 times the 15,000 installs it saw in the previous week.

The conversation on Parler has centered around voter fraud and an election stolen from Trump over the last week.

“Show the world we won’t let communist steal the White House,” one Parler user wrote Tuesday, in one of many thousands of posts using the hashtag #StopTheSteal.

That migration of social media users could be an unintended consequence of the effective job Facebook and Twitter did fact-checking and removing false content around the U.S. election, noted Starbird, the misinformation expert.

“What they tried to do is commendable, which is why people are moving to other platforms,” Starbird said.

Tesla 'full self-driving' vehicles can't drive themselves

DETROIT (AP) — Earlier this week, Tesla sent out its “full self-driving” software to a small group of owners who will test it on public roads. But buried on its website is a disclaimer that the $8,000 system doesn’t make the vehicles autonomous and drivers still have to supervise it.

The conflicting messages have experts in the field accusing Tesla of deceptive, irresponsible marketing that could make the roads more dangerous as the system is rolled out to as many as 1 million electric vehicle drivers by the end of the year.

“This is actively misleading people about the capabilities of the system, based on the information I’ve seen about it,” said Steven Shladover, a research engineer at the University of California, Berkeley, who has studied autonomous driving for 40 years. “It is a very limited functionality that still requires constant driver supervision.”

On a conference call Wednesday, Tesla CEO Elon Musk told industry analysts that the company is starting full self-driving slowly and cautiously “because the world is a complex and messy place.” It plans to add drivers this weekend and hopes to have a wider release by the end of the year. He referred to having a million vehicles “providing feedback” on situations that can’t be anticipated.

The company hasn’t identified the drivers or said where they are located. Messages were left Thursday seeking comment from Tesla.

The National Highway Traffic Safety Administration, which regulates automakers, says it will monitor the Teslas closely “and will not hesitate to take action to protect the public against unreasonable risks to safety.”

The agency says in a statement that it has been briefed on Tesla’s system, which it considers to be an expansion of driver assistance software, which requires human supervision.

“No vehicle available for purchase today is capable of driving itself,” the statement said.

On its website, Tesla touts in large font its full self-driving capability. In smaller font, it warns: “The currently enabled features require active driver supervision and do not make the vehicle autonomous. The activation and use of these features are dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions.”

Even before using the term “full self-driving,” Tesla named its driver-assist system “Autopilot.” Many drivers relied on it too much and checked out, resulting in at least three U.S. deaths. The National Transportation Safety Board faulted Tesla in those fatal crashes for letting drivers avoid paying attention and failing to limit where Autopilot can be used.

Board members, who have no regulatory powers, have said they are frustrated that safety recommendations have been ignored by Tesla and NHTSA.

Bryant Walker Smith, a University of South Carolina law professor who studies autonomous vehicles, said it was bad enough that Tesla was using the term “Autopilot” to describe its system but elevating it to “full self-driving” is even worse.

“That leaves the domain of the misleading and irresponsible to something that could be called fraudulent,” Walker Smith said.

The Society of Automotive Engineers, or SAE, has defined six levels, zero through five, to describe the functions of autonomous vehicles. In levels zero through two, humans drive the cars and supervise partially automated functions. In levels three through five, the vehicle does the driving, with level five describing a vehicle that can drive itself under all traffic and weather conditions.

The term “full self-driving” means there is no driver other than the vehicle itself, indicating that it would be appropriate to put no one in the vehicle, Walker Smith said.

Musk also said on Wednesday that Tesla would focus on setting up a robotaxi system where one person could manage a fleet of 10 self-driving cars in a ride hailing system.

“It wouldn’t be very difficult, but we’re going to just be focused on just having an autonomous network that has sort of elements of Uber, Lyft, Airbnb,” he said.

Tesla is among 60 companies with permits to operate autonomous vehicles with human backup drivers in California, the No. 1 state for Tesla sales. The companies are required to file reports with regulators documenting when the robotic system experiences a problem that requires the driver to take control – a mandate that could entangle the owners of Tesla vehicles in red tape.

Before Tesla is able to put fully self-driving vehicles on California roads, it will have to get another permit from state regulators. Only five companies, including Google spin-off Waymo and General Motors’ Cruise subsidiary, have obtained those permits.

The California Department of Motor Vehicles didn’t immediately respond to questions about Tesla’s latest plans for robotic cars.

NHTSA, which has shied away from imposing regulations for fear of stifling safety innovation, says that every state holds drivers accountable for the safe operation of their vehicles.

Walker Smith argues that the agency is placing too much of the responsibility on Tesla drivers when it should be asking what automakers are going to do to make sure the vehicles are safe. At the same time, he says that testing the system with vehicle drivers could be beneficial and speed adoption of autonomous vehicles.

Thursday afternoon, Musk was clearly trying to sell the full self-driving software. He wrote on Twitter that the price of “FSD beta” will rise by $2,000 on Monday.

YouTube follows Twitter and Facebook with QAnon crackdown

OAKLAND, Calif. (AP) — YouTube is following the lead of Twitter and Facebook, saying that it is taking more steps to limit QAnon and other baseless conspiracy theories that can lead to real-world violence.

The Google-owned video platform said Thursday it will now prohibit material targeting a person or group with conspiracy theories that have been used to justify violence.

One example would be videos that threaten or harass someone by suggesting they are complicit in a conspiracy such as QAnon, which paints President Donald Trump as a secret warrior against a supposed child-trafficking ring run by celebrities and “deep state” government officials.

Pizzagate is another internet conspiracy theory — essentially a predecessor to QAnon — that would fall in the banned category. Its promoters claimed children were being harmed at a pizza restaurant in Washington, D.C. A man who believed in the conspiracy entered the restaurant in December 2016 and fired an assault rifle. He was sentenced to prison in 2017.

YouTube is the third of the major social platforms to announce policies intended to rein in QAnon, a conspiracy theory they all helped spread.

Twitter announced in July a crackdown on QAnon, though it did not ban its supporters from its platform. It did ban thousands of accounts associated with QAnon content and blocked URLs associated with it from being shared. Twitter also said that it would stop highlighting and recommending tweets associated with QAnon.

Facebook, meanwhile, announced last week that it was banning groups that openly support QAnon. It said it would remove pages, groups and Instagram accounts for representing QAnon — even if they don’t promote violence.

The social network said it will consider a variety of factors in deciding whether a group meets its criteria for a ban, including the group’s name, its biography or “about” section, and discussions within the Facebook page or group or the Instagram account. Instagram is owned by Facebook.

Facebook’s move came two months after it announced a softer crackdown, saying it would stop promoting the group and its adherents. But that effort faltered due to spotty enforcement.

YouTube said it had already removed tens of thousands of QAnon videos and eliminated hundreds of channels under its existing policies — especially those that explicitly threaten violence or deny the existence of major violent events.

“All of this work has been pivotal in curbing the reach of harmful conspiracies, but there’s even more we can do to address certain conspiracy theories that are used to justify real-world violence, like QAnon,” the company said in Thursday’s blog post.

Experts said the move shows that YouTube is taking threats around violent conspiracy theories seriously and recognizes the importance of limiting the spread of such conspiracies. But, with QAnon increasingly creeping into mainstream politics and U.S. life, they wonder if it is too late.

“While this is an important change, for almost three years YouTube was a primary site for the spread of QAnon,” said Sophie Bjork-James, an anthropologist at Vanderbilt University who studies QAnon. “Without the platform Q would likely remain an obscure conspiracy. For years YouTube provided this radical group an international audience.”