HN Gopher Feed (2017-12-27)
Evidence of exponential speed-up in the solution of hard
optimization problems
43 points by godelmachine
https://arxiv.org/abs/1710.09278
stevemk14ebr - 45 minutes ago
Can someone provide a technical tl;dr? I'm afraid this is over my
head, but I'm extremely curious as I understand its importance.
CJefferson - 36 minutes ago
My initial instinct is that it's not very important, but we will
wait and see. There have been many, many examples of people
achieving significant improvements on many classes of NP-complete
problems, and more come out every year; of course, the more general
your improvement, the better. Modern SAT solvers with clause
learning are such an improvement. The claim of "exponential" seems
very dodgy to me: they are running over a fixed set of benchmarks,
and it's hard to measure an exponential improvement over such a
set. I will wait until I see this peer reviewed; after a brief skim
read I am a little worried about how big the circuits they are
making are. EDIT: they mostly compare against 3 specially crafted
classes of random problems. No one really cares about random
problems, and it looks to me like they made random problems their
system would be particularly good at dealing with. That sets off
alarm bells for me.
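A rough sketch of that measurement problem (my own illustration,
with invented sizes and constants, not numbers from the paper):
generate runtimes from a purely quadratic law over a narrow range
of sizes, then fit both an exponential and a polynomial model to
the log-runtimes; over a fixed benchmark set like this, both models
fit closely.

  import numpy as np

  rng = np.random.default_rng(0)

  # Synthetic "benchmark": runtimes that truly follow t ~ n^2, with
  # ~10% measurement noise, over a narrow range of sizes. All the
  # numbers here are invented purely for illustration.
  sizes = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
  times = 1e-4 * sizes**2 * rng.lognormal(0.0, 0.1, size=sizes.shape)

  # Fit two models to the log-runtimes:
  #   exponential growth: log t = a + b * n
  #   polynomial growth:  log t = a + b * log n
  exp_fit = np.polyfit(sizes, np.log(times), 1)
  poly_fit = np.polyfit(np.log(sizes), np.log(times), 1)

  exp_rss = np.sum((np.log(times) - np.polyval(exp_fit, sizes)) ** 2)
  poly_rss = np.sum((np.log(times)
                     - np.polyval(poly_fit, np.log(sizes))) ** 2)
  print("exponential-model residual:", round(float(exp_rss), 4))
  print("polynomial-model residual: ", round(float(poly_rss), 4))
  # Both residuals are small compared with the overall spread of
  # log t, so a handful of benchmark sizes cannot by itself settle
  # whether the scaling (or a claimed speed-up) is exponential.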
ColinWright - 9 minutes ago
Quoting from the abstract:
> ... a non-combinatorial approach to hard optimization problems
> that ... finds better approximations than the current
> state-of-the-art.
So it appears that they're finding approximate solutions, so
already it's rather less significant than the title suggests.
Then:
> We show empirical evidence that our solver scales linearly with
> the size of the problem, ...
We know that for most NP-complete problems, most instances are
easy. The question then is whether they are testing their
algorithms on instances that are known to be hard. There's a chance
they're doing something like finding factors of random integers,
which we know is easy in almost every instance. I'm deeply
pessimistic.
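To put a number on the "random integers are easy" point, here's a
quick sketch (my own, not from the paper): sample uniformly random
256-bit integers and count how many already reveal a factor by
trial division against the primes below 100 alone.

  import random

  # What fraction of random 256-bit integers give up a factor to
  # naive trial division? (Top bit forced so every sample really
  # is 256 bits long.)
  SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
                  43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

  def has_small_factor(n):
      return any(n % p == 0 for p in SMALL_PRIMES)

  samples = [random.getrandbits(256) | (1 << 255)
             for _ in range(10_000)]
  easy = sum(has_small_factor(n) for n in samples)
  print(f"{easy / len(samples):.1%} have a prime factor below 100")
  # Expect roughly 88% (= 1 - prod(1 - 1/p) over primes p < 100),
  # so a benchmark of random instances says little about the hard
  # ones.

So unless the instances are drawn from a distribution known to be
hard, linear scaling on them tells you very little.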
zmonx - 41 minutes ago
The authors have built a start-up based on these ideas:
http://memcpu.com/ They provide their SAT solver as a service that
you can try. A related paper I recommend in this context is
"NP-complete Problems and Physical Reality" by Scott Aaronson:
https://www.scottaaronson.com/papers/npcomplete.pdf
whaaswijk - 30 minutes ago
It's worth noting that Scott Aaronson has commented on previous
work by the same authors:
https://www.scottaaronson.com/blog/?p=2212
At the time, he was convinced that their approach wouldn't scale.
I'm not sure about their current approach, but at a glance it seems
similar to the previous one.
Previous work: https://arxiv.org/pdf/1411.4798.pdf
colordrops - 36 minutes ago
This keeps popping up every few months for several years, and seems
like BS. Here's an example:
https://news.ycombinator.com/item?id=8652475
And here's Scott Aaronson's debunking:
https://www.scottaaronson.com/blog/?p=2212
If these guys could really solve NP-complete problems, they should
have some amazing concrete results to show at this point, which
they don't.
Animats - 59 minutes ago
This is really important if not bogus. Any comments from people in
the field? It's like back-propagation for digital logic.
zitterbewegung - 45 minutes ago
Looking at the authors' papers, it doesn't look worth reading.
kough - 56 minutes ago
"Huge if true"
marmaduke - 56 minutes ago
Did you look at the paper, or are you just throwing the analogy out
there? It seems the described problems are integer programming,
which wouldn't have gradients as in back-propagation. The magic
seems to lie in the so-called self-organizing logic gates, detailed
here: https://arxiv.org/abs/1512.05064
Animats - 1 minute ago
Right. See the PDF of the paper at [1]. They're mapping Boolean
logic into a kind of analog system.
  "SOLGs can use any terminal simultaneously as input or output,
  i.e., signals can go in and out at the same time at any terminal
  resulting in a superposition of input and output signals ... The
  gate changes dynamically the outgoing components of the signals
  depending on the incoming components according to some rules
  aimed at satisfying the logic relations of the gate. ... A SOLG
  ... can have either stable configurations ... or unstable ...
  configurations. In the former case, the configuration of the
  signals at the terminals satisfies the logic relation required
  ... and the signals would then remain constant in time.
  Conversely, if the signals at a given time do not satisfy the
  logic relation, we have the unstable configuration: the SOLG
  drives the outgoing components of the signal to finally obtain a
  stable configuration."
This is sort of like supervised training of a neural net, I think.
Test cases with both inputs and desired outputs are needed, and
applying them to the network pushes the parameters towards values
that yield the desired outputs. It's kind of like deliberately
over-training a neural net to encode some explicit function. The
paper is vague about how you train this thing. It seems like it has
to be driven by test cases, but they don't say much about that.
It's not clear that this scales. It ought to work for small numbers
of gates, but for large numbers, does this process converge?
[1] https://arxiv.org/pdf/1512.05064.pdf
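Here's a toy caricature of that "drive the terminals until the
logic relation holds" behaviour, just to make the idea concrete.
It's my own penalty-relaxation sketch, not the memristive SOLG
dynamics of the paper: an AND gate's three terminals are continuous
signals, and all of them, inputs included, follow gradient descent
on the penalty (c - a*b)^2 until a stable configuration with
c = a*b is reached.

  import numpy as np

  # Toy "self-organizing" AND gate: terminals a, b, c are continuous
  # signals in [0, 1]; every terminal (inputs included) is driven
  # toward satisfying the logic relation c = a AND b. This is an
  # illustrative penalty-gradient sketch, not the circuit dynamics
  # described in the paper.

  def and_penalty_grad(a, b, c):
      # Gradient of E = (c - a*b)^2 with respect to (a, b, c).
      r = c - a * b
      return np.array([-2.0 * r * b,   # dE/da
                       -2.0 * r * a,   # dE/db
                        2.0 * r])      # dE/dc

  def relax(a, b, c, lr=0.1, steps=500):
      x = np.array([a, b, c], dtype=float)
      for _ in range(steps):
          x -= lr * and_penalty_grad(*x)
          x = np.clip(x, 0.0, 1.0)     # keep signals in logical range
      return x

  # An inconsistent assignment (a=1, b=1, c=0) is an "unstable
  # configuration": every terminal moves until the gate settles at
  # a point where c = a*b.
  print(relax(1.0, 1.0, 0.0))

Whether this kind of relaxation still converges, and converges to
Boolean values, once thousands of such gates are wired together is
exactly the scaling question above.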