Google is expected to soon start getting rid of some inactive accounts.
The company, owned by Alphabet, is slated to start its phased purge of personal Google Accounts whose owners haven’t signed into or used them in the past two years as soon as Friday, according to a May blog post and a webpage about its inactivity policy.
Google previously indicated that accounts that were created but never used would be the first to face deletion.
Accounts obtained through businesses, schools and other organizations won’t be subject to the potential deletion. For the inactive personal accounts, both the account and its contents “within Google Workspace (Gmail, Docs, Drive, Meet, Calendar) and Google Photos” could get erased, the company previously said.
Google has said it will notify users of accounts facing deletion multiple times well ahead of time, with additional warnings sent to their recovery email addresses.
On top of signing in to an account, Google has said reading an email, creating a Google Doc, watching a YouTube video and conducting Google searches are among some of the things that qualify as activity. Having a subscription to a news outlet or an app linked to a Google Account does as well.
The potential upcoming deletions stem from an update to its inactivity policy that it implemented in mid-May to boost security efforts. It said at the time that inactive accounts “often rely on old or re-used passwords that may have been compromised, haven’t had two factor authentication set up, and receive fewer security checks by the user.”
Google accounts provide access to various services offered by the tech giant. Gmail, a popular service that requires one, has been around for nearly 20 years.
The company itself reached its 25-year anniversary in late September.
Instagram’s Reels video service is designed to show users streams of short videos on topics the system decides will interest them, such as sports, fashion or humor.
The Meta Platforms-owned social app does the same thing for users its algorithm decides might have a prurient interest in children, testing by The Wall Street Journal showed.
The Journal sought to determine what Instagram’s Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.
Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.
The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts that followed those children had also demonstrated interest in sex content related to both children and adults. The Journal also tested what the algorithm would recommend after its accounts followed some of those users, which produced more-disturbing content interspersed with ads.
In a stream of videos recommended by Instagram, an ad for the dating app Bumble appeared between a video of someone stroking the face of a life-size latex doll and a video of a young girl with a digitally obscured face lifting up her shirt to expose her midriff. In another, a Pizza Hut commercial followed a video of a man lying on a bed with his arm around what the caption said was a 10-year-old girl.
The Canadian Centre for Child Protection, a child-protection group, independently ran similar tests, with similar results.
Meta said the Journal’s tests produced a manufactured experience that doesn’t represent what billions of users see. The company declined to comment on why the algorithms compiled streams of separate videos showing children, sex and advertisements, but a spokesman said that in October it introduced new brand safety tools that give advertisers greater control over where their ads appear, and that Instagram either removes or reduces the prominence of four million videos suspected of violating its standards each month.
The Journal reported in June that algorithms run by Meta, which owns both Facebook and Instagram, connect large communities of users interested in pedophilic content. The Meta spokesman said a task force set up after the Journal’s article has expanded its automated systems for detecting users who behave suspiciously, taking down tens of thousands of such accounts each month. The company also is participating in a new industry coalition to share signs of potential child exploitation.
Companies whose ads appeared beside inappropriate content in the Journal’s tests include Disney, Walmart, online dating company Match Group, Hims, which sells erectile-dysfunction drugs, and The Wall Street Journal itself. Most brand-name retailers require that their advertising not run next to sexual or explicit content.
“Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions,” said Samantha Stetson, a Meta vice president who handles relations with the advertising industry. She said the prevalence of inappropriate content on Instagram is low, and that the company invests heavily in reducing it.
After the Journal contacted companies whose ads appeared in the testing next to inappropriate videos, several said that Meta told them it was investigating and would pay for brand-safety audits from an outside firm.
Following what it described as Meta’s unsatisfactory response to its complaints, Match began canceling Meta advertising for some of its apps, such as Tinder, in October. It has since halted all Reels advertising and stopped promoting its major brands on any of Meta’s platforms. “We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” said Match spokeswoman Justine Sacco.
Robbie McKay, a spokesman for Bumble, said it “would never intentionally advertise adjacent to inappropriate content,” and that the company is suspending its ads across Meta’s platforms.
Charlie Cain, Disney’s vice president of brand management, said the company has set strict limits on what social media content is acceptable for advertising and has pressed Meta and other platforms to improve brand-safety features. A company spokeswoman said that since the Journal presented its findings to Disney, the company had been working on addressing the issue at the “highest levels at Meta.”
Walmart declined to comment, and Pizza Hut didn’t respond to requests for comment.
Hims said it would press Meta to prevent such ad placement, and that it considered Meta’s pledge to work on the problem encouraging.
The Journal said that it was alarmed that its ad appeared next to a video of an apparent adult sex act and that it would demand action from Meta.
Meta created Reels to compete with TikTok, the video-sharing platform owned by Beijing-based ByteDance. Both products feed users a nonstop succession of videos posted by others, and make money by inserting ads among them. Both companies’ algorithms show to a user videos the platforms calculate are most likely to keep that user engaged, based on his or her past viewing behavior.
The Journal reporters set up the Instagram test accounts as adults on newly purchased devices and followed the gymnasts, cheerleaders and other young influencers. The tests showed that following only the young girls triggered Instagram to begin serving videos from accounts promoting adult sex content alongside ads for major consumer brands, such as one for Walmart that ran after a video of a woman exposing her crotch.
When the test accounts then followed some users who followed those same young people’s accounts, they yielded even more disturbing recommendations. The platform served a mix of adult pornography and child-sexualizing material, such as a video of a clothed girl caressing her torso and another of a child pantomiming a sex act.
Experts on algorithmic recommendation systems said the Journal’s tests showed that while gymnastics might appear to be an innocuous topic, Meta’s behavioral tracking has discerned that some Instagram users following preteen girls will want to engage with videos sexualizing children, and then directs such content toward them.
“Niche content provides a much stronger signal than general interest content,” said Jonathan Stray, senior scientist for the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.
Current and former Meta employees said in interviews that the tendency of Instagram algorithms to aggregate child sexualization content from across its platform was known internally to be a problem. Once Instagram pigeonholes a user as interested in any particular subject matter, they said, its recommendation systems are trained to push more related content to them.
Preventing the system from pushing noxious content to users interested in it, they said, requires significant changes to the recommendation algorithms that also drive engagement for normal users. Company documents reviewed by the Journal show that the company’s safety staffers are broadly barred from making changes to the platform that might reduce daily active users by any measurable amount.
The test accounts showed that advertisements were regularly added to the problematic Reels streams. Ads encouraging users to visit Disneyland for the holidays ran next to a video of an adult acting out having sex with her father, and another of a young woman in lingerie with fake blood dripping from her mouth. An ad for Hims ran shortly after a video depicting an apparently anguished woman in a sexual situation along with a link to what was described as “the full video.”
Even before the 2020 launch of Reels, Meta employees understood that the product posed safety concerns, according to former employees.
Part of the problem is that automated enforcement systems have a harder time parsing video content than text or still images. Another difficulty arises from how Reels works: Rather than showing content shared by users’ friends, the way other parts of Instagram and Facebook often do, Reels promotes videos from sources they don’t follow.
In an analysis conducted shortly before the introduction of Reels, Meta’s safety staff flagged the risk that the product would chain together videos of children and inappropriate content, according to two former staffers. Vaishnavi J, Meta’s former head of youth policy, described the safety review’s recommendation as: “Either we ramp up our content detection capabilities, or we don’t recommend any minor content,” meaning any videos of children.
At the time, TikTok was growing rapidly, drawing the attention of Instagram’s young users and the advertisers targeting them. Meta didn’t adopt either of the safety analysis’s recommendations at that time, according to J.
Stetson, Meta’s liaison with digital-ad buyers, disputed that Meta had neglected child safety concerns ahead of the product’s launch. “We tested Reels for nearly a year before releasing it widely, with a robust set of safety controls and measures,” she said.
Video-sharing platforms appeal to social-media companies because videos tend to hold user attention longer than text or still photos do, making them attractive for advertisers.
After initially struggling to maximize the revenue potential of its Reels product, Meta has improved how its algorithms recommend content and personalize video streams for users.
Social-media platforms and digital advertising agencies often describe inappropriate ad placements as unfortunate mistakes. But the test accounts run by the Journal and the Canadian Centre for Child Protection suggest that Meta’s platforms targeted some digital marketing at users interested in sex.
Among the ads that appeared regularly in the Journal’s test accounts were those for “dating” apps and livestreaming platforms featuring adult nudity, massage parlors offering “happy endings” and artificial-intelligence chatbots built for cybersex. Meta’s rules are supposed to prohibit such ads.
The Journal informed Meta in August about the results of its testing. In the months since then, tests by both the Journal and the Canadian Centre for Child Protection have shown that the platform continued to serve up a series of videos featuring young children, adult content and apparent promotions for child sex material hosted elsewhere.
As of mid-November, the center said, Instagram was continuing to steadily recommend what the nonprofit described as “adults and children doing sexual posing.”
After the Journal began contacting advertisers about the placements, and those companies raised questions, Meta told them it was investigating the matter and would pay for brand-safety auditing services to determine how often a company’s ads appear beside content it considers unacceptable.
Meta hasn’t offered a timetable for resolving the problem or explained how in the future it would restrict the promotion of inappropriate content featuring children.
The Journal’s test accounts found that the problem even affected Meta-related brands. Ads for the company’s WhatsApp encrypted chat service and Meta’s Ray-Ban Stories glasses appeared next to adult pornography. An ad for Lean In Girls, the young women’s empowerment nonprofit run by former Meta Chief Operating Officer Sheryl Sandberg, ran directly before a promotion for an adult sex-content creator who often appears in schoolgirl attire. Sandberg declined to comment.
Through its own tests, the Canadian Centre for Child Protection concluded that Instagram was regularly serving videos and pictures of clothed children who also appear in the National Center for Missing and Exploited Children’s digital database of images and videos confirmed to be child sexual abuse material. The group said child abusers often use the images of the girls to advertise illegal content for sale in dark-web forums.
The nature of the content—sexualizing children without generally showing nudity—reflects the way that social media has changed online child sexual abuse, said Lianna McDonald, executive director for the Canadian center. The group has raised concerns about the ability of Meta’s algorithms to essentially recruit new members of online communities devoted to child sexual abuse, where links to illicit content in more private forums proliferate.
“Time and time again, we’ve seen recommendation algorithms drive users to discover and then spiral inside of these online child exploitation communities,” McDonald said, calling it disturbing that ads from major companies were subsidizing that process.
Hall & Oates singer Daryl Hall’s temporary restraining order blocked bandmate John Oates from completing the sale of his stake in the group’s business, FOX Business can confirm.
After attempting to begin arbitration on Nov. 9, Hall resorted to filing a separate lawsuit and requested a temporary restraining order to keep Oates from selling his share in Whole Oats Enterprises. The business venture is managed by both musicians.
The temporary restraining order, which was granted on Nov. 17, prohibits Oates from selling his share to Primary Wave Music before an arbitrator weighs in or until the temporary restraining order expires, according to court documents obtained by FOX Business.
The initial complaint, along with other court documents pertaining to the case, remain under seal. Writing in favor of sealing certain filings, Hall’s attorneys reasoned that it’s a private dispute under an agreement with confidential terms, concerning a confidential arbitration process.
Representatives for Hall and Oates did not immediately respond to FOX Business’ requests for comment.
The legal battle began on Nov. 16 when Hall sued Oates in Nashville, Tennessee, according to the court docket. The lawsuit is sealed by court order, but is listed under the category of contract/debt. The temporary restraining order was granted Nov. 24.
The two men formed the pop-rock band Hall & Oates in the 1970s, and while they’ve never officially broken up, both Hall and Oates have carried on solo careers for years.
Hall made it clear there was a separation between him and Oates in a 2022 interview.
“You think John Oates is my partner?… He’s my business partner,” Hall said during an appearance on Bill Maher’s “Club Random” podcast. “He’s not my creative partner.”
“John and I are brothers, but we are not creative brothers. We are business partners. We made records called ‘Hall & Oates’ together, but we’ve always been very separate, and that’s a really important thing for me.”
Hall used the example of the band’s song “Kiss on My List” to prove how separate the two were. While Oates is listed as a co-producer on the track, he is not listed as a songwriter.
“I did all those [harmonies],” Hall said. “That’s all me.”
Hall & Oates released their debut album “Whole Oats” in 1972. Since their start, they have put out 18 studio albums and scored six No. 1 singles. Some of Hall & Oates’ biggest hits include “Rich Girl,” “Maneater” and “You Make My Dreams.”
The two toured together as recently as October 2022, according to Variety.
The Associated Press contributed to this report.