HN Gopher Feed (2017-08-18) - page 1 of 10
___________________________________________________________________
YouTube admits 'wrong call' over deletion of Syrian war crime
videos
153 points by jacobr
http://www.middleeasteye.net/news/youtube-admits-wrong-call-over...
___________________________________________________________________
RandVal30142 - 1 hours ago
Something people need to keep in mind when parsing this story is
that many of the affected channels were not about militancy; they
were local media outlets. Local outlets that only gained historical
note due to what they documented as it was unfolding.
In Syria, outlets like Sham News Network have posted thousands upon
thousands of clips. Everything from stories on civilian
infrastructure under war, spots on mental health, live broadcasts of
demonstrations. Everything. Including documenting attacks as they
happen and after they have happened. Some of the affected accounts
were ones that documented the regime's early chemical weapons
attacks. These videos are literally cited in investigations.
All that is needed to get thousands upon thousands of hours of
documentation going back half a decade deleted is three strikes.
Liveleak is not a good host for such outlets because it is not what
these media outlets are about. Liveleak themselves delete content as
well, so even if the outlets fit the community it would not be a
'fix.'
ezoe - 1 hours ago
What I don't like about these web giant services is that getting
human support requires starting social pressure like this. If they
fucked something up by automation, contacting human support is
hopeless unless you have very influential SNS status or something.
itaris - 3 hours ago
I'm as much a proponent of automation as anyone else. But I think
right now Google is trying to do something way too hard. By looking
for "extremist" material, they are basically trying to determine
the intention of a video. How can you expect an AI to do that?
Dirlewanger - 1 hours ago
Doesn't matter, it's already out in the wild. This year so far,
tons of channels whose videos have had ads for years are being
instantly demonetized without explanation. If even one word from a
video's title or one tag is on their "controversial" shitlist, it's
SOL. Knowing YouTube's track record with this stuff, they will
continue to be silent and not give a shit.
Content creators who don't produce content for 5-year-olds need to
start looking somewhere other than YouTube.
notananthem - 1 hours ago
I mean, it's also YouTube, so... who cares?
wyager - 1 hours ago
Yes, who cares about the world's (by far) largest and most
popular video distribution site?
Fej - 1 hours ago
YouTube is the only option. Smart businesspeople (i.e. most
channels over 500k or even 200k subs) have alternate sources of
revenue, mainly sponsorships but also Patreon.
fao_ - 1 hours ago
It's almost like you forgot Vimeo
Fej - 17 minutes ago
I didn't forget Vimeo. YouTube is so dominant that, since
everyone else is on YouTube, uploading videos elsewhere is
career suicide. No one will switch websites or apps to watch just
one creator's content.
One of the keys to YouTube's success is its sub feed. All the
videos from all my favorite channels, all in one place. It's
extremely convenient. People tend to take the path of least
resistance.
tomc1985 - 45 minutes ago
Oh god please no
Dirlewanger - 1 hours ago
But Vimeo isn't, and doesn't want to be, YouTube.
anotherbrownguy - 29 minutes ago
The demonetization effort is too targeted and obvious to be
explained by "AI did it". If AI did it, they could undo it on
appeal, but they don't.
sbov - 3 hours ago
The AI doesn't seem to remove them though, it just flags them for
human review. In theory, these humans should be the ones
determining intent.
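A minimal sketch of the kind of triage this describes, purely
illustrative: the classifier score only routes a video into a human
review queue, and a reviewer makes the final call. The names and the
threshold are assumptions, not details of YouTube's actual pipeline.

    # Hypothetical triage: the model never removes anything on its own;
    # it only decides whether a trained human reviewer should look.
    from dataclasses import dataclass

    @dataclass
    class Video:
        video_id: str
        extremism_score: float  # assumed classifier output in [0, 1]

    REVIEW_THRESHOLD = 0.7      # assumed cutoff for routing to human review

    def triage(video: Video) -> str:
        if video.extremism_score >= REVIEW_THRESHOLD:
            return "queue_for_human_review"  # a human determines intent
        return "leave_up"

    print(triage(Video("abc123", 0.91)))  # queue_for_human_review
    print(triage(Video("def456", 0.12)))  # leave_up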
duskwuff - 3 hours ago
And in this case, there should probably be a separately trained
group of reviewers to carefully examine these videos. Not the
same group that's quickly checking over videos to see if
they're pornographic, for instance.
shallot_router - 3 hours ago
How do you know there isn't already a separately trained
group of reviewers whose only role is to carefully examine
videos related to war crimes/terrorism/violence? I suspect
there was, and Google/YouTube's senior management just
decided to take a harder line on it than they should've.
mc32 - 3 hours ago
I think most of their review ops are in Manila, PH, no? It'd
take some time to get them up to speed on that...
vacri - 1 hours ago
O_o
jtmcmc - 1 hours ago
the highest level are in the US
TheSpiceIsLife - 1 hours ago
Why do you believe that to be the case?
ben_w - 3 hours ago
That's certainly a first step, but I doubt it's a full
solution. What's the phrase, "dog whistles" for phrases and
keywords that only a target audience would understand?
Raphmedia - 3 hours ago
But is Youtube really the right platform for such videos? It's a
platform made to host videos in order to put advertisers' ads on
them. When I think "raw war videos", I think of Liveleak.
pryelluw - 3 hours ago
YouTube is now what TV used to be. So yes, you should be able
to show this content. However, the platform needs to provide
better tools to users and producers to aid in categorizing
content.
tree_of_item - 2 hours ago
When did TV ever show content like this?
PhasmaFelis - 1 hours ago
I remember graphic photos of Iraqi war crime victims on the
evening news in '91.
devrandomguy - 3 hours ago
For many users, Youtube is the only video publishing site, and
many people get their news through YT. Very few people are
aware of the existence of LiveLeak; it is an ineffective
platform for spreading awareness.
[deleted]
tekromancr - 2 hours ago
Agreed. I don't click liveleak links if I don't feel like
watching someone get murdered on camera. So, basically, I
don't click liveleak links.
aaron-lebo - 2 hours ago
Would you prefer to stumble across those on YouTube? It seems
better that material which needs to be kept for history and
remain uncensored, regardless of advertiser pressure, is on a
site dedicated to that instead of a site which most people use
for videos of recipes, memes, and great football headers.
devrandomguy - 2 hours ago
Yeah, there does need to be some sort of inhibition against
highly traumatizing content; it should certainly not be promoted
to people who do not seek it out. But purging news videos of
statues being destroyed and ancient buildings being demolished
is going too far.
megous - 2 hours ago
This is a solved problem. Flag it and it will be put behind a
confirmation screen asking whether you're really sure you want
to view the video.
[deleted]
nsxwolf - 17 minutes ago
Basically every YouTuber I follow has complained about having
videos demonetized this week. Subjects ranging from video game
reviews to body dysmorphic disorder. It really seems they've
bitten off more than their machine learning algorithms can chew
here.
jmcdiesel - 2 hours ago
I don't think the problem is automation... It's people's
expectation for it to be perfect, and the egoic drive to blame
someone when something goes wrong. There was no reason for the
hype around this story... an AI determinator had a false
positive. That's not Google attacking the videos, that's a
technical issue, and it needs to have zero feelings involved
because the entire process happened in a damned computer
incapable of feelings... But everyone needs to feed their outrage
porn addiction...
megous - 2 hours ago
Read the article. It says that humans made the final decisions.
[deleted]
colordrops - 2 hours ago
It's not a technical issue. Software is not yet capable of
accurate content detection, and even if it were, it's not clear
whether this sort of thing should be automated. It's not like
google can just change a few lines of code and the problem is
gone.
jmcdiesel - 2 hours ago
The point is, there will be false positives; there is no reason
to get upset and hurt over them... There is no perfect system.
If it's automated, there will be false positives (and
negatives); if there is a human involved, you have a clear bias
issue; if there is a group of humans involved, you have societal
bias to deal with... There is no perfect system for something
like this, so the best answer is to use something like this,
that gets it right most of the time... then clean up when it
makes a mistake. And you shouldn't have to apologize for the
false positive; people need to put on their big boy pants and
stop pretending to be the victim when there is no victim to
begin with...
saurik - 2 hours ago
This is the exact same argument for "stop and frisk", and
that is just totally NOT OK.
tree_of_item - 2 hours ago
It's not the exact same argument because stop and frisk
is not automated.
saurik - 29 minutes ago
It isn't the same process being defended, but I clearly
didn't claim that: the argument used to defend the different
processes, however, is the same. This "put on your big boy
pants" bullshit is saying that people should accept any
incidental harassment because false positives are to be
tolerated and no system is perfect, so we may as well just
use this one. If the false positives of a system discriminate
against a subset of people--as absolutely happens with these
filters, which end up blocking people from talking about the
daily harassment they experience or even using the names of
events they are attending without automated processes flagging
their posts--then that is NOT OK.
https://www.washingtonpost.com/business/economy/for-facebook...
https://www.lgbtqnation.com/2017/07/facebook-censoring-lesbi...
jmcdiesel - 1 hours ago
That's exactly the OPPOSITE of stop and frisk.
1) Stop and frisk is BIASED heavily on race because it's a
HUMAN making the choice...
2) Stop and frisk is the GOVERNMENT, and therefore actually
pushes up against the constitution.
How do you see these things as remotely the same?
saurik - 32 minutes ago
The false positives are not random: they target
minorities; these automated algorithms designed to filter
hate have also been filtering people trying to talk about
the hate they experience on a daily basis. They keep
people from even talking about events they are attending,
such as Dykes on Bikes. It is NOT OK to tell these people
to "put on their big boy pants" and put up with their
daily dose of bullshit from the establishment.
https://www.washingtonpost.com/business/economy/for-facebook...
https://www.lgbtqnation.com/2017/07/facebook-censoring-lesbi...
[deleted]
ajross - 2 hours ago
> It's not a technical issue. Software is not yet capable of
accurate content detection
Your second sentence is a technical argument, which makes your
first a lie. Obviously Google disagreed, which is why they put
this system into place. And if they were wrong about that, they
were wrong for technical reasons, not moral ones.
I mean, you can say there's a policy argument about accuracy
vs. "justice" or whatever. It's a legitimate argument, and you
can fault Google for a mistake here. But given that this was an
automated system, it's disingenuous to try to make more of this
than is appropriate.
colordrops - 2 hours ago
If you just stare at the words and ignore my meaning, sure.
But saying this is a technical problem is like saying that
climate change is a technical problem because we haven't
got fusion reactors working yet.
ajross - 2 hours ago
Then I don't understand what your words mean. Climate
change is a technical problem and policy solutions are
technical.
My assumption was that you were contrasting "technical"
problems (whether or not Google was able to do this analysis
in an automated way) with "moral" ones (Google was evil to
have tried this). If that's not what you mean, can you spell
it out more clearly?
lovich - 1 hours ago
Is there any problem you wouldn't frame as technical
then? If the software isn't anywhere close to capable
enough to do this task and YouTube decides to use it
anyway that is a management problem. Otherwise literally
every problem is technical and we just don't have the
software to fix it yet
ajross - 29 minutes ago
Sure: "Should Google be involved in censoring extremist
content?". There's a moral question on exactly this
issue. And the answer doesn't depend on whether it's
possible for Google to do it or not.What you guys and
your downvotes are doing is trying to avoid making an
argument on the moral issue directly (which is hard) and
just taking potshots at Google for their technical
failure as if it also constitutes a moral failure. And
that's not fair.If they shouldn't be doing this they
shouldn't be doing this. Make that argument.
colordrops - 1 hours ago
If you believe climate change is a technical problem then
there isn't much point continuing this discussion. Using
that logic you could claim that any problem is technical
because everything is driven by the laws of physics.
PhasmaFelis - 1 hours ago
Your whole premise is wrong, because the final decisions were
made by humans. But even if they weren't, you're still
mistaken. If you write a program to do an important task, it is
your responsibility to see that it's both tested and supervised
to make sure it does it properly. Google wasn't malicious here,
but it was dangerously irresponsible.
762236 - 3 hours ago
Automation is the only real solution. These types of conversations
seem to always overlook how normal people don't want to watch such
videos. Do you want to spend your day watching this stuff to grade
them?
vacri - 1 hours ago
People also don't want to spend their day scrubbing toilets, but
there are plenty of janitors out there.
EpicEng - 2 hours ago
Yet YouTube is admitting that the videos should not have been
pulled, and there's no AI in the world that could have made the
right call here. So... what sort of automation are you
suggesting? It seems as though the real solution is the exact
opposite of what you're proposing; human review by better trained
personnel with clearly defined criteria.
762236 - 24 minutes ago
This is irresponsible toward the trained personnel. We have job
safety requirements for people working on assembly lines. We
should also have psychological safety for people, and there are
lots of stories of employees who are paid to view these toxic
videos suffering harm (even developing PTSD).
em3rgent0rdr - 2 hours ago
"Do you want to spend your day watching this stuff to grade
them?"What about providing more tools for community to categorize
disturbing videos other than simply "flag".
ben_w - 2 hours ago
I will literally never choose to watch a disturbing but real
event. War crimes, either in the form of documenting them or
promoting them, will be flagged as "people like Ben never click
on previews of this video, don't waste time suggesting it to
them in future". I won't even get as far as the page the flag
button is on, never mind any other options.
paganel - 1 hours ago
> These types of conversations seem to always overlook how
normal people don't want to watch such videos
That makes me not normal, I guess. Plus, the NSFL videos were
almost instantly taken off YT after having been uploaded; what
had remained was really interesting stuff documenting the war in
Syria (or at least interesting for people such as myself, a guy
interested in wars and conflicts in general).
hnaccy - 34 minutes ago
So hire abnormal people? I'm pretty sure there's a healthy chunk
of the population who would be unfazed and would love a cushy job
judging videos.
norea-armozel - 3 hours ago
I think YouTube really needs to hire more humans to review flagging
of videos rather than leave it to a loose set of algorithms and
swarming behavior of viewers. They assume wrongly that anyone who
flags a video is honest. They should always assume the opposite and
err on the side of caution. And this should also apply to any
Content ID flagging. It should be the obligation of accusers to
present evidence before taking content down.
schoen - 3 hours ago
> They assume wrongly that anyone who flags a video is honest.
I don't think they assume that at all. If they did, you'd see at
least an order of magnitude more videos removed. I agree with the
sentiment of your criticism, but I think we could phrase it more
in terms of prior probabilities, or something about the false
positive and false negative rates in their review process.
Flagging of videos is extremely common, and even a small amount of
unreliability in the review process translates into a huge number
of mistakes.
Also, users of the site don't actually agree with each other much
at all about which removals were in error; we could say that
there's absolutely abysmal inter-rater reliability if the
end-users of the site are the "raters" of the quality of content
removal decisions.
Also, most people who flag things don't necessarily know much at
all about YouTube's terms of service or how YouTube has
interpreted or applied them in the past, so it's hard to be clear
on what it means for flaggers to be honest or dishonest. Probably
the most common meaning of flagging is "ugh, I'm upset that this
video is up on YouTube".
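A back-of-the-envelope version of that point, with made-up numbers
(neither the flag volume nor the error rate is a real YouTube
figure): even a highly reliable review process produces a large
absolute number of wrong decisions at this scale.

    # Illustrative arithmetic only; both inputs are assumptions.
    flags_reviewed_per_day = 200_000   # assumed daily volume of reviewed flags
    review_error_rate = 0.01           # assume 99% of decisions are correct

    mistakes_per_day = flags_reviewed_per_day * review_error_rate
    print(mistakes_per_day)            # 2000.0 wrong decisions every single day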
norea-armozel - 3 hours ago
The biggest problem, imo, isn't the random flagger but rather
the concerted actions of groups to flag videos. This is obvious
when reddit or 4chan users swarm a channel they don't like. This
kind of behavior needs to be mitigated in some way. I think a
quick solution would be to force a cooldown timer on flagging of
24-48 hours for all users to ensure they're not abusing the
system. That should include random users who file DMCA takedowns
and aren't partnered with YouTube in some way.
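A minimal sketch of that cooldown idea, assuming an in-memory
per-user timestamp store and a 24-hour window; real anti-abuse
systems would obviously be more involved, and all names here are
illustrative.

    import time

    COOLDOWN_SECONDS = 24 * 60 * 60        # 24h, per the 24-48h suggestion
    last_flag_at: dict[str, float] = {}    # user_id -> time of last accepted flag

    def try_flag(user_id: str, video_id: str) -> bool:
        """Accept a flag only if the user's cooldown has expired."""
        now = time.time()
        last = last_flag_at.get(user_id)
        if last is not None and now - last < COOLDOWN_SECONDS:
            return False                   # still cooling down; flag ignored
        last_flag_at[user_id] = now
        # ...record the flag against video_id for the review queue here...
        return True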
ue_ - 2 hours ago
Very much agreed on this swarm behaviour. A political channel
made by a very kind person with nice intentions didn't seem to
be breaking any rules at all, yet the /pol/ board on 8chan
coordinated mass-flagging attacks against his videos twice,
which resulted in his channel being deleted twice.
osteele - 1 hours ago
HN discussion of deletion event:
https://news.ycombinator.com/item?id=14998429
alexandercrohde - 1 hours ago
I think YouTube needs to consider backing off from regulating
political content. The fact is, politics and morality are
inherently intermingled. One can use words like "extremist", but
sometimes the extremists are the "correct" ones (like our founding
fathers, who orchestrated a revolution). How could any system
consistently categorize "appropriate" videos without making moral
judgements?
secfirstmd - 31 minutes ago
Agree, but it's far easier for politicians (May, Rudd, Trump,
Turnbull, Putin, Netanyahu, Xi Jinping, Mugabe, Zuma, Chan-ocha,
el-Sisi, Erdoğan, Khamenei, Maduro - the list is endless) to
blame videos on YouTube for radicalising people than it is to
tackle the long-running political, historical and socio-economic
grievances that fuel the fire.
anotherbrownguy - 17 minutes ago
Do you think any rich established British company supported the
American revolution? Why do you expect Google to help anything
new or different regardless of how "correct" it is? Big
established players can't afford revolutions, too risky for them.
balozi - 1 hours ago
Well, the AI did such a bang-up job sorting out the mess in the
comment section that it got promoted to sorting out the videos
themselves.
charlesism - 19 minutes ago
5 minutes reading YouTube comments is enough to make me ill. I
don't know what a couple hours a day after school would do to a
person after ten years. We'll all find out, I guess, once this
generation of kids reaches adulthood.
pgnas - 14 minutes ago
YouTube (Google) has become EXACTLY what they said they were not
going to be. They are evil.
jimmy2020 - 58 minutes ago
I really don't know how to describe my feelings as a Syrian when I
learn that the most important evidence witnessing the regime's
crimes was deleted because of a 'wrong call'. And it's really
confusing how an artificial algorithm gets confused between what
is obviously ISIS propaganda and a family buried under the rubble,
and this statement makes things even worse. Mistakenly? Because
there are so many videos? Just imagine that happening to any
celebrity's channel. Would YouTube issue the same statement? I
don't think so.
tdurden - 38 minutes ago
Google/YouTube needs to admit defeat in this area and stop trying
to censor, they are doing more harm than good.