Our commitments

How does YouTube keep harmful content off the platform?

We have Community Guidelines - policies that outline what content is not acceptable to post. We work to quickly remove content that violates our policies. We use a combination of people and machine learning to detect potentially problematic content at scale. Once such content is identified, human reviewers verify whether it violates our policies. If it does, the content is removed and used to train our machines for better coverage in the future.

Managing harmful content

How does YouTube detect content that violates its policies?

YouTube has automated systems that aid in the detection of content that may violate our policies. The YouTube community, as well as experts in our Trusted Flagger program, also helps us spot problematic content by reporting it.

What kind of content does YouTube remove?

YouTube has Community Guidelines - policies that outline content that is not allowed on the platform. We remove content that violates these policies when it is flagged to us, either by our users or by our automated systems.

While our Community Guidelines apply wherever you are in the world, YouTube is available in more than 100 countries - so we also comply with local law. If anyone in the YouTube community thinks content might violate local law, they can report it using our online form.

Sometimes, content comes close to - but doesn't quite cross - the line of violating our Community Guidelines, and is therefore not removed. However, we still limit the spread of such borderline content and harmful misinformation.