HN Gopher Feed (2017-10-13) - page 1 of 10 ___________________________________________________________________
Colorizing black and white photos with deep learning
155 points by saip
https://blog.floydhub.com/colorizing-b&w-photos-with-neural-netw...
___________________________________________________________________
bootcat - 6 hours ago
Thank you, really good read and a great product idea!
whatrocks - 6 hours ago
I'd like to train this on color comic strips and then run something
traditionally black and white like xkcd through it. Seems like it
could make the colorization part of hand drawn animation much
easier.
web007 - 1 hour ago
You'll probably want something closer to a GAN like pix2pix:
https://phillipi.github.io/pix2pix/
An example implementation would look something like edges2cats:
https://affinelayer.com/pixsrv/
quotemstr - 1 hour ago
The suggestion of using a classification network as a loss function
is brilliant! I love how we can, in general, elevate the
sophistication of ML models by having different models interact and
train each other.
vwcx - 5 hours ago
As a professional photo editor and historian, colorized photos
really agitate me. I'm all for the creation of new ways to get
people to engage with historical primary documentation, but the
nuance that these colorizations are interpretations gets lost
immediately. Do an image search for "D-Day in color" and try to tell
me which results are original color negatives and which are
colorizations made by teenagers.
I'm also a little confused as to why colorizations always aim to
restore color to the equivalent of a faded color negative, with
muted tonality and grain. Human logic is funny.
WalterBright - 3 hours ago
I like colorized historical photos. It brings them to life in a
remarkable way. I'd like to see the old B+W movies colorized (the
Ted Turner ones don't count, as they were done very poorly). Sure,
the colors will never be exact, because we don't know what the
original colors were. But that doesn't matter in any material
way.
WalterBright - 3 hours ago
> I'm also a little confused as to why colorizations always aim
to restore color to the equivalent of a faded color negative,
with muted tonality and grain.
Many movies set in the past, or that have flashbacks to the past,
will often mute the colors. Most modern movies muck around with
the colors in post production, too. The worst is when they go for
the blue/orange palette.
dahart - 1 hour ago
> the nuance that these colorizations are interpretations gets
lost immediately.
True for all AI; neural networks are doing amazing things, but the
output is a synthesis. It's a complex interpolation of its
training inputs that may seem "good" or reliable, but it is never
to be taken as truth or fact, and it can be arbitrarily wrong with
unbounded errors.
> I'm also a little confused as to why colorizations always aim to
restore color to the equivalent of a faded color negative, with
muted tonality and grain. Human logic is funny.
This isn't a human logic problem. Normally colorizations don't
affect tonality and grain much; they are putting color splashes on
top of a B/W image. This is true of hand-painted colorization, as
well as the digital colorization here. You can't get rid of grain
or adjust tone by adding color. One can adjust tone and grain, but
then you're doing more than colorizing, and going even further
down the road of "interpretation" you're concerned about.
In this particular case, the author did mention "A more diverse
dataset makes the pictures brownish". Brown is the average color
in natural photos, so minimizing error tends to make things
brownish. That is separate from leaving faded tone & grain intact,
but it's a second reason why AI-based colorization will tend
toward muted color.
justinator - 4 hours ago
> I'm also a little confused as to why colorizations always aim to
restore color to the equivalent of a faded color negative, with
muted tonality and grain. Human logic is funny.
Well, as a professional photo editor, you know that if the
original b+w photo captures an image with, say, a 50% gray value,
you don't know whether the original color was bright red or closer
to that 50% gray. Bright red has a much higher chroma value, but
chroma isn't recorded in a b+w photo - color saturation is lost.
This is easily demonstrated by making an image in your favorite
image editing program that's just straight-up red, changing the
image mode to grayscale, then asking someone else entirely to
guess the original color.
That, and I think the style is to mimic hand-tinting photographs,
where you would paint right on a b+w photo. The colors would look
"faded" because whatever was used to tint the photograph needed to
have a transparent medium, for the information of the photograph
itself to shine through. That, and there are only so many colors
you could use when hand-tinting. Back to our 50% gray. What if
that was... bright yellow? You can't tint "bright yellow" onto a
50% gray area of a photograph. Yellow is highly transparent, and
the gray would be too powerful to let its chroma value shine
through.
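The red-versus-gray ambiguity described above is easy to demonstrate in a few lines. A minimal sketch (using the Rec. 601 luma weights that most grayscale conversions apply; the pixel values are illustrative):

```python
# Rec. 601 luma: the weighted sum used by most grayscale conversions.
def to_gray(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

pure_red = to_gray(255, 0, 0)    # a fully saturated red...
mid_gray = to_gray(76, 76, 76)   # ...and a dull mid-gray
print(round(pure_red), round(mid_gray))  # both land on 76: indistinguishable in B&W
```

Once both pixels map to the same gray value, no colorizer (human or neural) can recover which one was originally saturated.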
vwcx - 4 hours ago
Thanks. That's a perspective that didn't really dawn on me
until your comment. I guess in some ways, the muted colors
could be seen less as the colorist's stylistic choice than as
an appeal to a "safe" representation of chroma values that
don't offer information about vibrancy.
jacobolus - 2 hours ago
I don't think that's quite it (i.e. that modern amateur photo
colorists are intentionally aiming for a 19th century tinted
photo style).
The bigger problem is that real images have varying chroma and
hue within single shapes, but actually mimicking that when
coloring a black and white photograph takes a huge amount of
skill, attention to detail, and work. You have to think about
what the lighting was like, what material it was striking and
at what angle, what lens filters the photographer might have
used to capture the image, etc., and then you have to go in
and painstakingly paint all of those fine gradations and
textures in.
It's especially difficult to do a convincing job with skin,
but most materials are hard to color convincingly. It's much
easier to apply color to whole shapes as a blob, but this
looks terrible (very obviously wrong) when you make the colors
very strong.
If you want to try for yourself, get a Photoshop expert friend
to find some color photographs (without showing them to you
first) and convert them to black and white, applying whatever
kind of intermediate processing he/she desires as long as it
results in a roughly photorealistic looking black and white
image. Then you try to photorealistically colorize the photo,
spending as much time and effort on it as you want. When you
have something you are satisfied with, compare to the colored
original. It's very likely that the colorized version will
look pretty bad in comparison, even if it looked vaguely okay
on its own.
nerdponx - 5 hours ago
> I'm also a little confused as to why colorizations always aim to
restore color to the equivalent of a faded color negative, with
muted tonality and grain. Human logic is funny.
I always assumed that if you tried to use "full color" it would
look weird, since the photos themselves usually are quite faded
and grainy.
jacobolus - 5 hours ago
More than that, it would look weird because restoring the color
to a photo in a way that looks plausibly photorealistic is
really hard. If you make something more stylized, the viewer
doesn't have as much reference to compare and realize that
there's something wrong.
To the grandparent: people have been trying to colorize black
and white photos since the 1840s; complaining isn't going to
stop them now.
https://en.wikipedia.org/wiki/Hand-colouring_of_photographs
vwcx - 4 hours ago
> To the grandparent: people have been trying to colorize black
and white photos since the 1840s
True. But there was also tons of skepticism in the medium
throughout the mid- to late 1800s. Oliver Wendell Holmes'
writing on the veracity of photography is a neat reminder that
it took the public decades to come to terms with the idea that
the photographic process was a somewhat-veritable facsimile of
"real" life.
doppenhe - 6 hours ago
colorization applied to video:
http://demos.algorithmia.com/video-toolbox/
iluvmylife - 9 hours ago
The averaging problem in colorization is interesting. If it learns
that an apple can be red, green, and even yellow - how does it know
how to color it? An HN user in an earlier thread suggested using a
fake/real colorization classifier as a loss function. [1] But I
still feel that it would not solve the averaging problem. It would
hop between different colors and probably converge to brown. I
haven't come across a plausible solution so far.
[1] https://news.ycombinator.com/item?id=10864801
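The convergence to brown can be shown directly: minimizing squared error over conflicting examples drives the prediction to the per-channel mean, and the mean of a typical apple red and apple green is a muddy brown. A toy illustration (the RGB values here are made up for the example, not taken from any dataset):

```python
# Hypothetical RGB training targets for "apple": one red, one green.
red_apple   = (200, 30, 40)
green_apple = (80, 170, 60)

# The squared-error-optimal prediction is the per-channel mean...
mean_color = tuple((a + b) // 2 for a, b in zip(red_apple, green_apple))
print(mean_color)  # (140, 100, 50): neither red nor green, but a muddy brown
```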
matt4077 - 5 hours ago
You could adjust the error function. The common root-mean-square
error pushes predictions to the average. If you use absolute
errors, or even a logistic function instead, you'll encourage the
model to commit to a decision on a multimodal distribution.
Alternatively, use a discrete colour space and consider colours as
categorical data not implying any ordinal scale.
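The L2-versus-L1 difference is concrete: the squared-error minimizer is the mean, while the absolute-error minimizer is the median, which on a lopsided bimodal distribution snaps to the heavier mode. A quick stdlib sketch (the channel values are made up to mimic "mostly red apples, some green"):

```python
import statistics

# One color channel whose training targets are bimodal:
# mostly dark (five examples) with some bright (three examples).
targets = [0.1, 0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9]

l2_optimum = statistics.mean(targets)    # minimizes sum of squared errors
l1_optimum = statistics.median(targets)  # minimizes sum of absolute errors

print(l2_optimum)  # ~0.4: an in-between value that no example actually has
print(l1_optimum)  # 0.1: commits to the majority mode
```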
zardo - 6 hours ago
> But I still feel that it would not solve the averaging problem.
It would hop between different colors and probably converge to
brown.
At least to the extent that GANs work, it works. They will
alternate between the observed colours based on the noise vector.
They do not simply converge to averages, because the
discriminator easily recognizes brown apples as fakes.
emilwallner - 9 hours ago
It could try to classify the apple tree or the context, but that
would require a lot of training data. If it's out of context, it
should select a color based on probability. But it's hard to
solve this with just input and output data. The simple solution
is to use noncontradictory training data, i.e. only having green
apples.
I have an urge to teach it simple logic: instead of making it
brown, it selects the color with the highest probability from a
range of colors. However, I haven't come across a deep learning
implementation like this to mimic.
taneq - 6 hours ago
> If it learns that an apple can be red, green and even yellow -
how does it know how to color it?
I dunno, does it look more like a red apple, a green apple or a
yellow apple?
fizx - 5 hours ago
Google "learning xor".
zan2434 - 4 hours ago
Although it is quite egregious here, this is not a problem
inherent to colorization but rather to generative models in
general. Using something akin to a variational autoencoder would
solve this problem, because it learns a distributional
approximation rather than a single point estimate of the color,
and then the random noise vector input allows one to sample from
this output distribution. Similarly, Mixture Density Networks
allow you to model a distribution and then sample from it.
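The mixture/sampling idea can be sketched without any deep-learning machinery: instead of emitting one averaged color, the model outputs weights over plausible color modes and you sample one. A stdlib-only toy (the modes and weights are illustrative stand-ins for a learned mixture, not actual model output):

```python
import random

# Hypothetical mixture-model output for the pixel "apple skin":
# a distribution over color modes rather than a single point estimate.
modes   = {"red": (200, 30, 40), "green": (80, 170, 60), "yellow": (220, 200, 60)}
weights = {"red": 0.5, "green": 0.35, "yellow": 0.15}

def sample_color(rng):
    # Pick a mode according to its weight, then emit that mode's color;
    # every sample is a committed, plausible color, never the brown average.
    name = rng.choices(list(modes), weights=[weights[m] for m in modes], k=1)[0]
    return modes[name]

rng = random.Random(0)
print(sample_color(rng))  # one of the three mode colors, chosen stochastically
```

Repeated samples alternate between the modes in proportion to their weights, which is exactly the "hop between different colors" behavior, made deliberate instead of being an averaging failure.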
ReDeiPirati - 6 hours ago
It could be really interesting if it could return different
coloured versions and provide a way to explore these different
styles.
saip - 6 hours ago
A GAN style approach to learning and generating variants could be
interesting as well. It could generate a couple of hundred
plausible versions. Then you have another network that is trained
to differentiate between fake and real colored photos which picks
the best version.
houqp - 6 hours ago
What's even cooler is adding support for human annotations so
users can selectively give colorization hints for different parts
of the image to customize the output.
flsantos - 9 hours ago
Interesting! If pictures nowadays are colorized by hand in
Photoshop, it wouldn't be practical to colorize a full black and
white movie. I guess this deep learning approach would solve that
problem and let us colorize old black and white classic movies.
WalterBright - 2 hours ago
I'd like to see more than colorization. Consider the silent movie
"Wings". Very high quality Blu-rays of it are available. I would
colorize it, remove the dialog cards and dub the dialog, then add
foley sound effects and a music soundtrack!
saip - 8 hours ago
Agreed. I imagine this has applications in compression as well.
You could stream a movie (or a football game) in black and white
and enable each device to color it on the spot. A similar
technique could also be done for HD/3D/VR.
zardo - 5 hours ago
Yes, you provide a handful of full data keyframes and
reconstruct the details of the stream from the middle out.
kiliankoe - 5 hours ago
That middle out compression has some fantastic Weissman
scores I believe.
roywiggins - 5 hours ago
Coloring football uniforms might be nearly impossible though...
mholmes680 - 7 hours ago
that is an amazing idea.
tzahola - 1 hour ago
Or you can just broadcast the audio from the game and a neural
net will synthesize the video on the fly. The possibilities are
_endless_!
zimpenfish - 9 hours ago
I think colorising b&w movies must already be fairly practical
given the size of this list:
https://en.wikipedia.org/wiki/List_of_black-and-white_films_...
terrabytes - 9 hours ago
This is refreshing. I've been learning machine learning through
Kaggle recently, and I'm a bit tired of the "tuning
hyperparameters" culture. It rewards people who have the pockets
to spend on computing power and the time to try every parameter.
I'm starting to find problems that don't have a simple accuracy
metric more interesting. It forces me to understand the problem
and think in new ways, instead of going down a checklist of
optimizations.
wrangler99 - 8 hours ago
Ha, it reminds me of what Andrej Karpathy said: "Kaggle
competitions need some kind of complexity/compute penalty. I
imagine I must be at least the millionth person who has said
this." [1] It would be interesting to collaborate/compete on more
creative tasks and have different metrics for success.
[1] https://twitter.com/karpathy/status/913619934575390720
ReDeiPirati - 7 hours ago
So true. Another reason to put constraints on Kaggle
competitions is the production environment. How many winning
models have been used in production? I suspect the number is
near zero. High accuracy at high latency makes an ML/DL
artifact unusable in production, because from the users' point
of view, speed is much more valuable than the difference
between 97% and 98% accuracy.
emilwallner - 8 hours ago
I'm also starting to follow people and communities that work with
deep learning in new ways. Here are some of my favorites:
[1] http://colah.github.io/
[2] https://iamtrask.github.io/
[3] https://distill.pub
[4] https://experiments.withgoogle.com/ai
perturbation - 3 hours ago
You can be a little less brute force if you use something like
hyperopt (http://hyperopt.github.io/hyperopt/) or hyperband
(https://github.com/zygmuntz/hyperband) for tuning
hyperparameters (Bayesian and multi-armed bandit optimization,
respectively). If you're more comfortable with R, then caret
supports some of these types of techniques as well, and mlr has
a model-based optimization package
(https://github.com/mlr-org/mlrMBO) as well. These types of
techniques should let you explore the hyperparameter space much
more quickly (and cheaply!), but I agree - having money to burn
on EC2 (or access to powerful GPUs) will still be a major factor
in tuning models.
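For a sense of the baseline these libraries improve on, here is a stdlib-only sketch of plain random search over a hyperparameter space (this is not hyperopt's API, and the objective function is a made-up stand-in for a real validation loss):

```python
import math
import random

def objective(lr, n_units):
    # Stand-in for a real validation loss; imagine a model training here.
    # Minimized around lr = 0.01 and n_units = 64.
    return (math.log10(lr) + 2) ** 2 + (n_units - 64) ** 2 / 1000

def random_search(n_trials, rng):
    best = None
    for _ in range(n_trials):
        # Sample each hyperparameter from its own distribution,
        # rather than walking a fixed grid.
        lr = 10 ** rng.uniform(-5, -1)    # log-uniform learning rate
        n_units = rng.randrange(16, 256)  # uniform layer width
        loss = objective(lr, n_units)
        if best is None or loss < best[0]:
            best = (loss, {"lr": lr, "n_units": n_units})
    return best

loss, params = random_search(200, random.Random(42))
print(loss, params)  # best loss found and the hyperparameters that achieved it
```

Bayesian and bandit methods go further by spending later trials near promising earlier ones (or cutting weak trials early), but even this unguided version already beats exhaustive grids when only a few hyperparameters really matter.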