HN Gopher Feed (2017-07-26) - page 1 of 10
Google and a nuclear fusion company have developed a new algorithm
182 points by jonbaer
https://www.theguardian.com/environment/2017/jul/25/google-enter...
suzzer99 - 2 hours ago
Am I the only one that never reads these articles but just goes
straight to the comments? It seems like reporters always get the
facts bungled and go for the simple story - out of necessity of
course.
trhway - 2 hours ago
For me it's about page loading: pretty much straightforward, fast, and predictable on HN, versus slow and/or heavy at the source, jerking the current position/scroll around and full of whatever other surprises. I want to know what it is about immediately, i.e. basically it is an issue of instant gratification for me :) If the information on the HN comments page isn't enough (which is rare), or the source is really vouched/confirmed to be interesting by itself, then I take the bullet.
suzzer99 - 1 hours ago
I bet you love usatoday.com.
twic - 1 hours ago
I also do this. HN comment pages are extremely varied in their
quality, but they do tend to be good at shooting down articles
which aren't as good as their titles suggest. Plus, HN loads
fast. So, open the comments, check to see if the article might be
worth reading, and if so, open it, flip back to the main page
while the article loads.
TaylorAlexander - 2 hours ago
On this article I clicked on it, realized The Guardian was going to be waaay too general, and then went to the comments knowing I would find a quick breakdown of the facts written by "my people" for an audience like me. Often I'll skip the article entirely.
suzzer99 - 2 hours ago
Yep - same thing with that "Roomba is selling maps of your
home" thing that's going around. Turns out they're considering
partnering with Alexa or something, and you'd have to opt in of
course. I just skipped straight to the comments to get the real
scoop instantly.
dekhn - 1 hours ago
I often jump straight to comments, but I think Google's blog post
for this is required reading:
https://research.googleblog.com/2017/07/so-there-i-was-firin...
ZenoArrow - 3 hours ago
Sounds like some promising results, hopefully this approach will continue to be useful.

Addressing the wider article, it always surprises me that the focus fusion approach is never mentioned in fusion articles put out by the mainstream media. I don't know what to attribute that to, but it's surprising that one of the most promising fusion approaches is constantly overlooked. To give an idea how drastically overlooked focus fusion is, here's a graph showing R&D budgets for different fusion projects...

http://lppfusion.com/wp-content/uploads/2016/05/fusion-funds...

... and here's a graph showing energy efficiency of fusion devices (running on deuterium I believe)...

http://lppfusion.com/wp-content/uploads/2016/05/wall-plug-ch...

You'd think that the second most efficient device would've gotten more than $5 million in funding over 20 years (I think the original funding was from NASA back in 1994).
[deleted]
dwringer - 1 hours ago
Maybe it is just confusion in terminology. I always thought
"focus fusion" had something to do with attention processing -
i.e. focusing on multiple (or cycled in rapid succession)
distinct concepts at once where each maps to a different mental
schema and leveraging the tendency of these schema to gradually
take on characteristics of one another until they form a coherent
integrated concept map. It was not an uncommon topic of
discussion online back in the 1990's, although I have not heard
much open discussion about it in nearly two decades. IIRC this
was a big contributing factor to the development of OOP and
something the facilitation of which was a primary objective of
places like SRI's Augmentation Research Center in the days of
Douglas Engelbart. Incidentally the rumor at the time (late
90's/2000?) was that Google had taken over commercial
applications of such research (most of which I presume is still
classified, considering how little discussion there is about it).
grnadav1 - 2 hours ago
You just KNOW Elon Musk is gonna beat 'em to it ;)
abefetterman - 3 hours ago
This is actually a really exciting development to me. (Note, what is exciting is the "optometrist algorithm" from the paper [1], not necessarily Google's involvement as pitched in the Guardian.) Typically a day of shots would need to be programmed out in advance, usually scanning over one dimension (out of hundreds) at a time. It would then take at least a week to analyze the results and create an updated research plan. The result is poor utilization of each experiment in optimizing performance. The 50% reduction in losses is a big deal for Tri Alpha.

I can see this being coupled with simulations as well, to understand sources of systematic errors and create better simulations which can then be used as a stronger source of truth for "offline" (computation-only) experiments.

The biggest challenge of course becomes interpreting the results. So you got better performance, but what parameters really made a difference and why? That is at least a more tractable problem than "how do we make this better in the first place?"

[1] http://www.nature.com/articles/s41598-017-06645-7
jlarocco - 3 hours ago
As a complete outsider, I don't understand what's special about
the "optometrist algorithm." As described in the Nature article
it's just hill climbing using humans as the evaluation
function.Isn't it basically the same thing they were already
doing but more granular?
abefetterman - 2 hours ago
Basically nobody was using automated gradient descent etc. because of the proclivity of these algorithms to get stuck on a boundary. The problem is the boundaries are not well defined. One example might be a catastrophic instability. If it gets triggered it has the potential to damage the machine. But the exact parameters at which the instability occurs are not well known. So with this algorithm you mix the best of both worlds: the human can guide away from the areas where we think instabilities are, and the machine can do its optimization thing. It's pretty simple overall but enables a big shift in how experiments are run.

Edit to add: these instabilities often look just like better performance on a shot-to-shot basis, which makes the algos especially tricky. Using a human we could say "this parameter change is just feeding the instability" vs. "oh this is interesting, go here".
hyperbovine - 2 hours ago
To be clear: there are no gradients here (right?) This is
just 0th order hill-climbing with a human assist.
ouid - 2 hours ago
how does one climb a hill with no gradient? [serious
question]
deepnotderp - 53 minutes ago
Finite differences, you estimate the gradient.
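For the unfamiliar, a toy illustration of that idea (the function and step size here are made up):

    # Central finite difference: estimate f'(x) from two nearby evaluations.
    def grad_estimate(f, x, h=1e-5):
        return (f(x + h) - f(x - h)) / (2 * h)

    # e.g. for f(x) = x**2 the true derivative at 3.0 is 6.0:
    # grad_estimate(lambda x: x * x, 3.0)  ->  ~6.0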
dmurray - 1 hours ago
You can climb a hill without knowing the gradient, so long as you can compare two points in terms of height. You randomly move in some direction, compare the new point to the old point, go to whichever of them is higher, and repeat. This sounds like what the experimenters are doing. Perhaps the GP was alluding to "first order hill climbing" as evaluating the gradient in every direction and climbing the steepest one, but the "0th order" version is also usually considered hill climbing and is better for some classes of problem.
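A minimal sketch of that comparison-only loop, with all names and the step size invented for illustration:

    import random

    def hill_climb(is_better, start, step=0.1, iters=100):
        """0th-order hill climbing: no gradients, only pairwise comparisons.

        is_better(old, new) returns True if the new point is at least as high,
        e.g. a human judging two experimental shots.
        """
        x = list(start)
        for _ in range(iters):
            # Random nearby candidate: perturb each coordinate a little.
            candidate = [xi + random.uniform(-step, step) for xi in x]
            if is_better(x, candidate):
                x = candidate  # keep whichever point is higher
        return x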
noobermin - 1 hours ago
That is exactly what they're doing. See the section on
Exploratory Technique, second to last paragraph. As I
said above, the possible innovation here is they can
change midstream the criteria one uses to decide what is
a "better shot".
ouid - 1 hours ago
is it picking a new configuration at random, or does it
still have to be "close" to the last configuration?
dmurray - 38 minutes ago
It still has to be close by some metric to be considered
hill climbing. The article doesn't make it clear, but I
suspect a lot of the insight in the algorithm is how the
computer chooses two similar sets of inputs that differ
in an "interesting" way.
Govindae - 2 hours ago
The naive way to do calculus. Use secants to approximate
the tangent. I think it's called finite difference.
noobermin - 1 hours ago
I am still very skeptical that a human is really that good at avoiding the problem areas, although they might be marginally better. Plus, they don't seem to claim that anywhere in the paper; instead, they just rated shots as either "better" or "just as good", i.e., a local evaluation, which won't let you avoid such areas, since that is a judgement requiring more knowledge than just the conditions in the neighborhood of the current reference.

The only thing I think could lead someone to your conclusion is that they can judge based on a host of criteria, not just a pre-defined set; maybe that's what you meant. Of course, intuitively, changing your criteria midstream would lead to bias in your judgement, I'd think, but that may be the real innovation here: something that is hard to do without a human judge in the mix.
erikpukinskis - 9 minutes ago
> I am still very skeptical that a human is really that good at avoiding the problem areas

Why? Humans have a much richer model than any computer does right now. We can draw on an effectively infinite set of possible models simultaneously. Experts in math and physics can narrow to a smaller but still huge array of sensical models. Existing AIs wouldn't even know where to start.

The AI doesn't even have an intrinsic sense of space, seeing as how it lacks a body. It's a very fast worker that can get things done when you give it very specific instructions, but it has no real ability to understand what it is doing or why it would want to do something different.
amelius - 1 hours ago
Perhaps a stupid question, but why can't the whole experiment be
run as a simulation?
noobermin - 1 hours ago
The system is fundamentally 6N-dimensional, with N ~ 10^23.
sixdimensional - 50 minutes ago
Yeah, 6N dimensions are fun! ;)
zaph0d_ - 57 minutes ago
Even if it would be the dream of a lot of theoretical physicists to replace experiments with simulations, this must not happen! Ever! Even if every complex system in the world could be simulated in reasonable time, it would still require experiments to verify or falsify the simulation results. A simulation is essentially just a calculation from a model someone came up with to describe a system. In order to check how good the model is, one has to check it against experimental data. Just expanding the models without experimental verification will not necessarily result in a good theoretical description. It would be like writing software without testing the components and expecting it to work correctly when you're done. There was recently an article on HN where economists were described as the astrologers of our time [1], since they do not verify their mathematical models to an extent where they can predict economic systems. This is another example where more experimental data should be considered in order to falsify certain theories.

Those are the reasons why string theorists will not (and should not) get any Nobel prize in the next decades. Since string theory's predictions are hard to measure on those small scales, there's no way of telling whether the model is any good until it is compared against suitable experimental data.

[1] https://aeon.co/essays/how-economists-rode-maths-to-become-o...
scoofy - 26 minutes ago
Agreed. My background is philosophy, and while I rarely get into the STEM arguments, this has everything to do with inductive learning vs. deductive learning. Any simulation will be run with the premises already built in, but cutting-edge science is always about learning what those premises are. If we knew what they were, it'd be trivial to set up the reactor. Here we need inductive experimentation to learn how to simulate it trivially.
amelius - 22 minutes ago
I believe this is more about solving an
engineering/mathematics problem, than about fundamental
physics and the scientific process.
yousefvi - 47 minutes ago
As a psychologist, this looks an awful lot like computerized adaptive testing methods, only instead of estimating some parameter vector about a person, you're estimating some parameter vector about plasma.

Even the title "optometrist algorithm" is telling, because that paradigm is a basic model for how a lot of testing is done, except that it's not the optometrist doing it, it's a computer.
hailmike - 3 hours ago
I want to start placing "Google and" before stating my accomplishments.

"Google and a nuclear fusion company have developed a new algorithm"

sounds way better than:

"Nuclear fusion company has developed a new algorithm using Google"

They may not mean the same, but in today's world faking it until you make it might pay off.
JohnJamesRambo - 4 hours ago
Google didn't enter the race. They helped a company with some
calculations.
dang - 3 hours ago
Ok, we changed the title to the first sentence of the article,
which basically says that.
grayhatter - 3 hours ago
thank you! I was very confused... twice...
dwaltrip - 3 hours ago
There was a talk about the state of nuclear fusion by some MIT folks linked here on HN a few days ago. One of the biggest takeaways was that many fusion efforts are very far away (3 to 6+ orders of magnitude) on the most important metric, Q, which is energy_out / energy_in. Additionally, much press and public discussion completely fails to mention this and other core factors that actually matter for making fusion viable.

I remember Tri Alpha being listed on one of the slides near the bottom left of the plot, 4 or 5 orders of magnitude away from break-even, where Q = 1 (someone please correct me if I'm remembering incorrectly). Is the 50% improvement described in the article meaningful, given that it would only be a fraction of an order of magnitude?

I understand the broader concept of combining experts and specialized software on complex problems is a powerful idea -- I'm just wondering if this specific result actually changes the game for Tri Alpha.
hyperbovine - 2 hours ago
But hey, string ~20 consecutive 50% improvements together and
you're at four orders of magnitude :-)
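(For the record, the arithmetic: each 50% gain multiplies Q by 1.5, so covering four orders of magnitude takes about log(10^4)/log(1.5), roughly 23 of them.)

    import math

    # number of consecutive 50% improvements needed for a 10,000x gain
    print(math.log(1e4) / math.log(1.5))  # ~22.7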
a1371 - 2 hours ago
I also watched that MIT talk and it was quite insightful; however, as I searched a bit more, I realized that the metrics in the presentation are only about achieving a surplus in the energy generated from fusion.

When it comes to industrializing the idea, the scope is far broader. For example, the speaker was saying that they can use the neutron streams resulting from fusion to create tritium. In reality, capturing the neutrons is much more complex than that [1]. Some of those may deposit on the inner surface of the tokamak and have to be recovered at 99% to reach breakeven. Nuclear waste is another concern in the opposite case.

Given all this, you get a sense of why companies like General Fusion [2] get funded. He showed that General Fusion is very far away on his metrics. But the pinching technology the company is offering allows for continuous use of fusion in rapid bursts (like an automatic rifle). When I met them at the Globe conference, they were claiming that they will be ready for production within 5 years of achieving a surplus. I am not sure how fast the tokamak can get there.

Sources:
1. http://thebulletin.org/fusion-reactors-not-what-they%E2%80%9...
2. http://generalfusion.com/
trhway - 2 hours ago
> One of the biggest takeaways was that many fusion efforts are very far away (3 to 6+ orders of magnitude) on the most important metric, Q, which is energy_out / energy_in.

NIF got breakeven on energy_out_of_laser, granted only on the scale of 1-2% of the initial energy. But those were old, non-semiconductor lasers. Laser diodes have 50% efficiency. They of course still need to be focused and pulse-formed, which will cause losses, yet it is much closer to real breakeven. Unfortunately, there's no more NIF, or more precisely, after closing down the fusion work they did start building super-powerful laser diode arrays for somebody else, for some other purposes.

Check out Sandia Z as well. Similar results, only with even better energy delivery efficiency: the energy-to-X-ray conversion there is something like 15%. Again, nobody is in a rush to make good fusion. The political will (translation: Congressionally approved funds) is just not there.
le-mark - 2 hours ago
I also watched that video, and was also a bit dismayed. It seemed to me that a lot of the projects with very low Q numbers weren't at the point of going for high Q numbers. The project I follow is Polywell [1], and my understanding is they've been working on confirming the physics of their approach (which involves 'wiffle ball' confinement) and so have not attempted pushing for break-even energy production.

The video was eye-opening though; I had no idea that high-temperature superconductors were set to revolutionize tokamaks. If the potential is there, it seems like the prudent thing to do would be to reset the ITER project and redesign it using the current generation of high-temperature superconductors. But I'm just an interested observer, what do I know?

[1] https://en.wikipedia.org/wiki/Polywell
briankelly - 3 hours ago
From the actual journal article:

> Two additional complications arise because plasma fusion apparatuses are experimental and one-of-a-kind. First, the goodness metric for plasma is not fully established and objective: some amount of human judgement is required to assess an experiment. Second, the boundaries of safe operation are not fully understood: it would be easy for a fully-automated optimisation algorithm to propose settings that would damage the apparatus and set back progress by weeks or months.

> To increase the speed of learning and optimisation of plasma, we developed the Optometrist Algorithm. Just as in a visit to an optometrist, the algorithm offers a pair of choices to a human, and asks which one is preferable. Given the choice, the algorithm proceeds to offer another choice. While an optometrist asks a patient to choose between lens prescriptions based on clarity, our algorithm asks a human expert to choose between plasma settings based on experimental outcomes. The Optometrist Algorithm attempts to optimise a hidden utility model that the human experts may not be able to express explicitly.

I haven't read the full article nor do I understand the problem space, but the novelty seems overstated based on this. Maybe they can eventually collect metadata to automate the human intuition.

Edit: here's their formal description of it: https://www.nature.com/articles/s41598-017-06645-7/figures/2
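As a rough sketch of the loop described in that excerpt (the proposal step below is just a random perturbation, and all names are invented; the paper's actual proposal strategy isn't spelled out here, so treat this as illustration only):

    import random

    def optometrist_loop(run_shot, expert_prefers, reference, rel_step=0.05, shots=50):
        """Sketch of the quoted idea: show the expert a pair of outcomes,
        keep whichever settings they prefer, repeat.

        run_shot(settings)    -> experimental outcome for those plasma settings
        expert_prefers(a, b)  -> True if the expert prefers outcome b over a
        reference             -> dict of named settings to start from
        """
        ref_outcome = run_shot(reference)
        for _ in range(shots):
            # Propose a candidate near the current reference settings.
            candidate = {k: v * (1 + random.uniform(-rel_step, rel_step))
                         for k, v in reference.items()}
            cand_outcome = run_shot(candidate)
            # "Which is better, 1 or 2?" -- the human answers.
            if expert_prefers(ref_outcome, cand_outcome):
                reference, ref_outcome = candidate, cand_outcome
        return reference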
pm90 - 3 hours ago
I mean, if it has not been done before, it doesn't look like
they're overstating the novelty. Most algorithms look "obvious"
in hindsight :).
euyyn - 2 hours ago
It's a well-known technique in the out-of-fashion world of
knowledge-based systems: To create an expert system, your
experts often won't be able to articulate their utility
function, so you extract it by presenting them A/B choices.
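One textbook-style way to do that extraction is a Bradley-Terry / logistic fit over the A/B answers. A toy sketch (nothing to do with the actual paper; all names invented):

    import numpy as np

    def fit_utility(features_a, features_b, chose_b, lr=0.1, epochs=500):
        """Learn weights w so that w . x scores each option, using only
        the expert's answers to repeated "A or B?" questions.

        features_a, features_b: (n, d) arrays, one row per question.
        chose_b: length-n array of 0/1, 1 meaning the expert chose B.
        """
        diff = np.asarray(features_b, float) - np.asarray(features_a, float)
        y = np.asarray(chose_b, float)
        w = np.zeros(diff.shape[1])
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-diff @ w))   # P(expert picks B)
            w += lr * diff.T @ (y - p) / len(y)   # gradient ascent on log-likelihood
        return w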
efm - 1 hours ago
Hot or not, but for dynamical systems optimization.
Kenji - 3 hours ago
In software jargon, this is called a "Wizard" (i.e. an installer wizard, calibration wizard, etc. that guides you through a more complicated process by asking a series of simple questions) and is an idea that dates back decades.
Sniffnoy - 2 hours ago
If I'm understanding this right, I'm pretty sure this is not in
fact just a wizard. It's using people's answers to "which of
these is better" to learn an objective function that can later
be used for optimization. A wizard is just presenting explicit
choices. Making one requires knowing all the possible paths
and results in advance. You could I suppose have an "implicit
wizard", where every choice was in terms of "Which of these two
examples do you prefer?" rather than explicitly stating what
the user was choosing between, but that would ultimately just
be a more confusing version of an explicit wizard -- it would
still require you to program in all the possible paths and
results in advance. That's much less interesting than this.
[deleted]
tuco86 - 2 hours ago
I also haven't read the full article. I wondered if it was worth reading before I clicked the link, really. How did they determine it would be faster than... what? Doing it without computers? And if it cuts down months' worth of computation to just some hours, can I expect a working fusion reactor in the next 1-2 years instead of 10-30? How did this become HN #1?
quickben - 3 hours ago
Outside of the title being misleading, I'm sceptical. It's one thing to have the hardware for research, and completely another to have the expertise for the research.

Google entered self-driving car research, and we have yet to see them driven around. This heavily reminds me of Intel and their diversification: up until recently they were in IoT, the maker market and what not. One solid push from AMD and they jumped out of everything too fast to track.

Google seems the same with nuclear fusion. They have the advertising money to throw around, but that's just it; they are in a different segment, and from the investing side I'm more inclined to stay away from their stock than to buy it.
whatrusmoking - 3 hours ago
Huh?

https://www.theverge.com/2017/5/10/15609844/waymo-google-sel...
https://www.slashgear.com/waymo-launches-early-rider-beta-pr...

They're easily 5 years ahead of the competition.
bllguo - 3 hours ago
Like the other comment, I don't see how your example of self-driving proves anything. In fact, the more I think about it, the more I'm confused by your comment. What are you skeptical about? Google here has demonstrated that their computational resources can be of great benefit to scientific causes such as nuclear fusion. If you're saying that you're skeptical Google can do nuclear fusion, I think you're missing the point.
sremani - 3 hours ago
I do not think that is the case; Alphabet is structured in a way that lets diverse interests be pursued. Sure, competition to Google search would compel them to retrench, but not to throw away the other organizations hosted under the Alphabet umbrella, unlike Intel.
The_Sponge - 3 hours ago
> Google entered the self driving cars research, and we have yet to see them driven around.

You see their working prototypes flying around Mountain View all the time. And they've been transparent with their progress. People have been working on this since the 80s.
jsmthrowaway - 3 hours ago
Considering "the whine of the electric motor in Waymo's 25mph prototype is mildly annoying when my window is open and they drive by several times an hour" is a real thing in my own life as a resident of Mountain View, it's odd to see the assertion that they don't drive around. And I live on a side street.
EternalData - 3 hours ago
Google might try to become the conglomerate of all forward-facing
things but it is somewhat funny to see how through it all, it's
their advertising revenues that form the core of the business.
sgt101 - 3 hours ago
I think it's their compute capability and massive interaction
with the concerns of humanity expressed via search; in the short
term that's leveraged to create advertising revenue, in the
longer term who knows?
zitterbewegung - 3 hours ago
This pattern happens more often than you think.

Microsoft: They make an operating system and office suite. Microsoft Research has labs on quantum computing, and they have five Turing Award winners (one is Leslie Lamport, who developed TLA+ while employed there).

Facebook: A social network that funds a bunch of deep learning and NLP research.

Elon Musk: Helped create PayPal, now does electric cars and rockets (Tesla, SpaceX).

NVIDIA: Made graphics cards for video games. Now those same devices allow for deep learning.
rgbrenner - 3 hours ago
Odd to see Microsoft in this list. They make money from a lot more than just "an Operating System and Office Suite" (their #3 and #1 sources of revenue respectively). What is odd about the #2 cloud computing company researching quantum computing?

https://www.onmsft.com/wp-content/uploads/2016/08/msftproduc...

And doesn't Facebook use deep learning and NLP in their product? How is that similar to Google investing in self-driving cars (for example)?
beambot - 3 hours ago
Why is that surprising? Look at individuals... they work for money so that they can pursue their own ambitions. It is rare to find an individual with the luxury of earning money by pursuing their own exact ambitions.

Kudos to Google for using their vast funds to finance their ambitions rather than just hoarding them away.
fishnchips - 3 hours ago
Which is exactly what they're trying to change - both with
"conventional" offerings like Android and Cloud, as well as with
what they call "moonshots".
[deleted]
DanBC - 3 hours ago
> it's their advertising revenues that form the core of the business.

You'd think the advertising staff would get pretty good treatment at Google. Do they?
euyyn - 2 hours ago
Why wouldn't they? All Googlers get very very good treatment.
dekhn - 1 hours ago
When I worked in the ads part, it was not glamorous (compared to search, or mobile, or social, or whatever the hot area was at the time), but I think we were treated well, and acknowledged for running a critical service that provided the revenue that allowed other parts of the company to do research and development into new things.

I can't speak for the entire advertising staff, of course.
MrQuincle - 3 hours ago
There are two directions within the energy world that I don't completely get. One of them is hydrogen storage, the other nuclear fusion.

From what I always understood, the high-energy neutrons produced by the fusion reaction irradiate the surrounding structure, and there is still considerable nuclear waste (although lifetimes are better than with nuclear fission). Do the scientists not care, or is this outdated info?
openasocket - 3 hours ago
You're thinking of https://en.wikipedia.org/wiki/Neutron_activation

You need to use materials that stand up well to neutron bombardment. Many materials upon neutron capture have a half-life measured in seconds, which isn't a big deal. As nuclear waste disposal goes, this really isn't a concern.
pm90 - 3 hours ago
The comparison is not between fusion and a hypothetical waste-free source of energy. It's between fusion and fission. The waste products of fission are much more dangerous and expensive to handle, and we still haven't found foolproof, effective ways of disposing of them.

OTOH, the danger from irradiated materials (whatever that is, this is the first time I'm hearing of it TBH) doesn't seem very pressing. I highly doubt any of the irradiated stuff would have a half-life of millions of years.
wbl - 2 hours ago
Half-life is inversely proportional to radiation intensity. The big issue with fission waste products is the nuclides radioactive enough to kill and long-lived enough to be annoying, not the nasty ones that go away in a few decades or the almost inert ones.
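The relation behind that statement, for a fixed number of atoms: activity A = lambda * N with lambda = ln 2 / t_half, so halving the half-life doubles the radiation intensity.

    import math

    def activity(n_atoms, half_life_seconds):
        # decays per second: A = (ln 2 / t_half) * N
        return math.log(2) / half_life_seconds * n_atoms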
xrange - 2 hours ago
Might as well put a Wikipedia link here about the fission product isotopes and their half-lives:

https://en.wikipedia.org/wiki/Nuclear_fission_product#Radioa...
Symmetry - 2 hours ago
The neutrons coming from the fusion reactor do have the potential to make the surrounding material radioactive. But that depends on exactly what the surrounding material is: some elements will become dangerous when exposed to neutrons and some won't. An important part of designing a fusion power plant, and one of the reasons it is difficult, is that you have to make sure that the materials you use can safely handle the neutrons coming from the fusion reaction without transmuting into dangerously radioactive forms.

This is in contrast to a fission reactor, where the fuel itself turns into dangerously radioactive elements when exposed to neutrons.

EDIT: The exception is that fusion power plant designers will want to surround the reactor with lithium in the hope that it will absorb a neutron and turn into tritium. The tritium is then carefully gathered because it forms the fuel for the reactor and is hard to get except in a nuclear reactor.
DennisP - 2 hours ago
I saw a presentation by the head of MIT's fusion department, in which he said that the waste would only need to be contained for several decades. It's very different from fission, where the fuel itself produces long-lived high-level waste.

With the reactor discussed in this article, the situation is even better, because it would use boron fusion. That reaction doesn't produce neutron radiation at all; there'd just be a tiny amount from side reactions.
siscia - 3 hours ago
I do have a naive question. Suppose a big breakthrough comes out of a private company, and such an innovation is necessary to use nuclear fusion. Will the company be free to do whatever it pleases with the technology, or will it somehow be forced to let others use it, maybe against the payment of some royalties?
crusso - 2 hours ago
If they have a patentable breakthrough, they would be able to
restrict use of their discovery for the duration of their patent.
jccooper - 2 hours ago
Patent law generally recognizes the option of the state to enforce compulsory licensing, though it's rarely exercised. Eminent domain may also be used to take patents.

The modern approach seems to be to just let people find a way around the patent, or simply ignore it and litigate.
ekun - 1 hours ago
It probably also depends on the funding they have received from places like the Department of Energy and the contracts they have signed about their research being publicly available, but if Paul Allen is the funder and not the government, maybe it's all private. My own naivety would say billionaires investing in clean technology would share it with the world, but who knows?
dekhn - 1 hours ago
Yep. They could keep it a trade secret (no patent), patent it and not license it, or patent it and license it so others could use it under some terms.

This only matters for the life of the patent, and is consistent with the intent of patents.
[deleted]
DrNuke - 3 hours ago
Diversification of the business, methinks... nuclear is so big (but slow) that a penny invested today may become a tenner tomorrow, just in case.
mtgx - 3 hours ago
I think their universal quantum computer (to be announced later this year) could accelerate fusion research even more, as I imagine it could more accurately simulate the atomic reactions and experiments. Practical quantum computers may be just what we were missing to finally be able to build working fusion reactors.

The millions of possible "solutions" and algorithms for working fusion reactors may be what has made fusion research so expensive and fusion reactors seem so far away. Quantum computers may be able to cut right through that hard problem, although we may have to wait a bit longer until quantum computers are useful enough to make an impact on fusion research. I don't know if that's reaching 1,000 qubits or 1 million qubits.
_FKS_ - 1 hours ago
Even if you had the computing power AND you were simulating your fusion reactor's plasma in real time, while it's running, AND you could predict the plasma instabilities in real time (under a few ms), you would still need a way to "counter" those instabilities in said plasma. And you need to counter fast, before the instability "poisons" the entire plasma, something that happens within a few ms. If you don't, your entire experiment stops, and it takes a while (minutes) to get it back. So it's not only about the computing power.