HN Gopher Feed (2017-08-18) - page 1 of 10 ___________________________________________________________________
Docker Is Raising Funding at $1.3B Valuation
181 points by moritzplassnig
https://www.bloomberg.com/news/articles/2017-08-09/docker-is-sai...
___________________________________________________________________
baq - 4 hours ago
they should file for an ICO, dockercoin or containotoken would sell
like hotcakes. /s
gue5t - 5 hours ago
Imagine the "value" investors could make if they cordoned off every
useful composition of Linux kernel features into a "product" like
this.
gaius - 5 hours ago
Yeah, this is like Red Hat spinning off RPM as a separate
business...
dman - 5 hours ago
Don't give them any ideas!
dboreham - 5 hours ago
Historically the valuable software products were those that
(somehow) accreted a "Priesthood" who both promoted their use and
restricted widespread understanding (hence solidifying their
position as the high priests). I'm thinking here of: Netware,
Oracle (the RDBMS) and Weblogic as examples. I think what's going
on with Docker is that a Priesthood is being created essentially
without the product, or rather with the product as a secondary
matter. Quite a clever business trick if they pull it off.
InTheArena - 5 hours ago
As someone who did containers with LXC before docker, I think
your characterization is off. The value in Docker isn't in the
kernel features, but in the user space. Ever try OpenShift 1/2?
As painful as that was, it was still better than rolling your
own.

Docker's big problem isn't what's below them, it's what's above
them. Their monetization strategy has to involve the Fortune 500,
and right now, all of the momentum in that area is being eaten by
Google with Kubernetes. Kubernetes has a very vocal faction that
wants nothing to do with Docker and would replace it with
something else. Docker has plenty of challenges, but they have
delivered significant value, most of which is still open source.
tyingq - 4 hours ago
Your comment could be interpreted similarly. If the effort for
k8s to move away from docker is minimal, that implies there
wasn't much of anything novel in it.
squeed - 4 hours ago
Indeed, especially as a company that has a reputation for
producing unstable APIs that break between minor versions. This
is a team that seems to think it is acceptable to return ASCII
art and backspaces in their container management APIs:
https://github.com/docker/swarm/issues/1214

So, you're right, but they have a tainted reputation. I wish them
all the best, but I don't have high hopes.
shykes - 3 hours ago
> especially as a company that has a reputation for producing
unstable APIs that break between minor versions.

I wish people would stop repeating this without verifying it
first. Here is a list of past breaking changes in the Docker API:
https://docs.docker.com/engine/breaking_changes/
rrdharan - 3 hours ago
I see the backspaces (as per that issue) but where's the ASCII
art?

EDIT: nevermind, found it via one of the linked issues:
https://github.com/docker/swarm/issues/1237
pdog - 4 hours ago
Docker built a better mousetrap. FreeBSD had jails in 2000. Linux
had namespaces and cgroups for years. No one cared until Docker
came up with containers in 2013.
idorosen - 4 hours ago
OpenVZ, user-mode Linux, and others were very widespread with
web hosting and VPS providers since mid-/early-2000s (and
probably a little earlier than that too). Docker didn't invent
containers, but they were a first mover (as far as I know) in
the current flavor of "PaaS" or hosted "non-frameworks" (i.e.
environments that don't force you into the hosting provider's
userland choices such as library versions, etc.). It's easy to
dismiss it as a better mousetrap, but it was an inflection
point in the same way puppet and cfengine before it were
inflection points in configuration management and
"infrastructure as code"...Still, I find the Docker daemon
massively invasive and too much of a potential gaping security
hole. I wish it were more modular and less of it ran
privileged/as root; I don't run it on anything sensitive as a
result. Usually, properly configuring a box and relying on
existing process and other forms of isolation (including
namespaces) is good enough, especially with good package
management. Great for UX/workflow tooling in a dev VM though!
shykes - 2 hours ago
> I wish it was more modular and less of it ran privileged/as
root

Under the hood Docker has been heavily modularized. Check out
https://containerd.io which handles all the low-level container
operations in a lightweight daemon with a much smaller footprint.
You can use it standalone without pulling in the rest of the
Docker platform.
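For a sense of what that looks like, containerd ships with a
bare-bones ctr client; a minimal sketch of driving it directly
(image choice is just an example, and flags may vary by version):

    # pull and run an image with containerd alone, no Docker engine
    ctr images pull docker.io/library/alpine:latest
    ctr run --rm docker.io/library/alpine:latest demo echo "hi from containerd"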
moosingin3space - 4 hours ago
My personal opinion on this is that Docker's application of
Linux namespaces+cgroups happened to solve a specific problem -
reducing differences between prod and dev and doing stateless
deployment - in a way that is pretty simple and didn't break
people's expectations for how Linux works. Prior to that,
namespaces and cgroups didn't have a "killer app".
jaequery - 4 hours ago
why are they called Software Maker?
mperham - 4 hours ago
So they aren't confused with "Pants Maker"?
nxc18 - 4 hours ago
Non-tech people have no idea what Docker is. Still, even a lot of
the less up-to-date people know what it is, and I suspect
vanishingly few people (relative to something like GitHub) really
understand it beyond the mechanics of the commands. Also, the
intentional re-use of shipping verbiage demands disambiguation.
PascLeRasc - 4 hours ago
I've had to use Docker before, and I still have no idea what it
is. It's true that they love to say "virtualization" and
"containerize" the same way the media loves "disavow" and
"recuse".
samstave - 3 hours ago
I love that comparison!
chrisper - 4 hours ago
In the case of Docker, containers = prepackaged, ready-to-use,
out-of-the-box software. Basically, you run some software (e.g.
mysql) inside a docker container and it comes with everything it
needs preconfigured and installed (e.g. libraries,
configurations, ...). This makes it easy to:
- Deploy to many places having the same environment
- Update these programs easily (you just need to download and run
  a new container image)
- Clean up easily by discarding the container
I should probably also add that Docker is also a community to
share these application images (containers).
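For example, a minimal sketch (the password is a placeholder):

    # run MySQL with its libraries and config baked into the image
    docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
    # cleanup is just discarding the container
    docker rm -f mydb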
z3t4 - 4 hours ago
I don't understand containers. First you go through great pains
sharing and reusing libraries. Then you make a copy of all the
libraries and the rest of the system for each program!?
mmgutz - 4 hours ago
I had trouble understanding the benefits of it as well. A
container is a snapshot of a specific version of an
application/daemon and the OS it runs on, all bundled into a
deployable/runnable image. You create a container for every
version of an app. Deploying or rolling back is almost as simple
as deploying an image on any host that runs Docker.
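As a sketch, with a made-up registry and tags:

    docker build -t registry.example.com/myapp:1.4.2 .  # snapshot app + OS
    docker push registry.example.com/myapp:1.4.2
    # deploying is running the new tag; rolling back, the old one
    docker run -d registry.example.com/myapp:1.4.2
    docker run -d registry.example.com/myapp:1.4.1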
chrisper - 4 hours ago
There is also the fact that things stay inside a container
(more or less). You want to remove mysql? No problem, just
discard the container. Compare that to removing mysql and all
its config files etc. from all over the place.
sigjuice - 35 minutes ago
The underlying problem is things like mysql that scatter
things all over the place. Docker only encourages such
sloppiness, IMHO.
lllr_finger - 4 hours ago
Immutability, and consequently determinism, are wonderful traits
to embrace when you're managing deployments across environments
and regions for dozens or hundreds of services.
eatmyshorts - 1 hours ago
And with testing...
k__ - 4 hours ago
Docker users have essentially given up. Nix[0] genuinely tries to
solve the problem.

[0] https://nixos.org
eatmyshorts - 3 hours ago
Others have discussed how docker uses a layered approach, and how
two containers that share a base system will share most of the
filesystem and memory.

The real power of containers comes with container orchestration
(i.e. Kubernetes, Mesosphere, and OpenShift). By leveraging
containers, container orchestration systems can provide high
availability, scalability, and zero-downtime rollouts and
rollbacks, among many other things. These things were hard before
containers & container orchestration. By allowing containers to
be moved between nodes in a cluster, one can generally achieve
higher hardware utilization than with VMs alone (which is in
itself a big improvement on software on bare-metal hardware). All
of this leads to easier/better continuous deployment as well,
which in turn leads to easier testing and greatly simplifies
provisioning of hardware for new projects.

So, the benefits are:
- Cheaper than VMs (through better hardware utilization)
- More reliable, through HA load balancers
- More scalable, through scalability load balancers
- Better testing, through CI/CD enabled by containers
- Faster application delivery by simplifying provisioning
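As a sketch of how little ceremony that involves on Kubernetes
(image names are illustrative):

    kubectl run myapp --image=myapp:1.4.2 --replicas=3    # HA: three instances
    kubectl expose deployment myapp --port=80             # load-balanced service
    kubectl set image deployment/myapp myapp=myapp:1.4.3  # zero-downtime rollout
    kubectl rollout undo deployment/myapp                 # rollback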
peterwwillis - 3 hours ago
Except:
- Nobody ever used VMs the way containers are
- A HA load balancer is not a container-specific concept
- There is no such thing as a 'scalability load balancer'
- You don't need a container to do CI/CD
- It's not faster, it's slower, and it's not simpler, it's more
  complex
geggam - 2 hours ago
This, and the fact that docker in AWS creates several layers of NAT
_seemethere - 2 hours ago
How is it slower to use containers than traditional machines?
Provisioning for a container is basically just downloading an
image and running it. Sounds like you haven't really used
containers at all.
eatmyshorts - 2 hours ago
- Nobody ever used VMs the way containers are? Many people are
using VMs to get better utilization of hardware by running
multiple distinct apps on the same hardware. Containers do this,
too. Container orchestration makes it really easy, allowing for
even better hardware utilization than with VMs. You say that
"nobody ever used VMs the way containers are", yet I've got
multiple clients that deployed to Kubernetes in large part
because of this very point.
- I never said an HA load balancer is a container-specific
concept. I said that "these things were hard before containers".
I stand by that statement. Containers and container orchestration
make HA proxies really, really simple.
- A scalability load balancer is a load balancer in front of a
service that monitors load and scales the number of instances
behind that load balancer up or down. Again, container
orchestration makes this really easy.
- No, you don't need a container to do CI/CD. But containers make
it much, much easier.
- It is faster. I can provision an entire cluster of machines in
about 5 minutes on AWS or GKE. If I have an existing cluster to
publish to, it's even easier--it's one line in my continuous
integration config file. Container orchestration has a learning
curve (I'm assuming this is why you say "it's more complex"?),
but it is tremendously easier and faster to provision hardware
for a project with containers and container orchestration when
compared to provisioning actual hardware, or even provisioning
VMs.
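For what it's worth, the five-minute provisioning claim looks
roughly like this on GKE (cluster name and size are made up):

    gcloud container clusters create demo-cluster --num-nodes=3
    kubectl get nodes   # minutes later, ready to schedule containers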
bognition - 4 hours ago
It's about creating and maintaining a clean environment.
Conceptually they aren't that different from a VM, just much
lighter weight.
dozzie - 3 hours ago
Except that it's not a clean environment. Make it a properly
built DEB or RPM package, and then we're starting (but just
starting) to talk about clean. Note also that binary packages
help with repeatable deployment whether it uses Docker,
deployment tools (e.g. Ansible or Salt), configuration management
tools (CFEngine, Puppet), or even manual steps, or a mix of all
these. Docker images only help with Docker deployment.
chrisper - 4 hours ago
Well, we need to differentiate between application containers
(Docker) and Linux containers (old: OpenVZ, new: LXD/LXC). Then
there are application containers which are even more "limited,"
like Flatpak. I'd argue that only Linux containers are like VMs.
imhoguy - 3 hours ago
Docker was an LXC wrapper in the past. Now it just handles the
cgroup stuff itself. You can run a full system in Docker, but
purists regard this as an antipattern.
chrisper - 3 hours ago
I'd prefer LXD / LXC anyway. Sadly, LXD porting to Debian
is a slow process.
gaius - 2 hours ago
An interview question I asked recently: what are the advantages
of Docker over an application deployed as a statically linked
binary?
dopamean - 2 hours ago
What kind of answers do you expect/would like to hear?
brianwawok - 1 hours ago
Based on most interviews, he wants you to answer like he
would answer or he wouldn't hire you.
gaius - 1 hours ago
Did you just assume my agenda?
gaius - 2 hours ago
I intend it to be a starting point for a conversation. Someone
who really understands it will be able to convince the skeptic
that I play; someone who is just typing the commands because it's
trendy won't. Also it helps if they know what a statically linked
binary is.
fapjacks - 1 hours ago
It is obvious from your comments that nobody will convince
you of any merit Docker has. You are not "playing a
skeptic".
gaius - 1 hours ago
I like containers as a concept, and I like the Docker
commands for building them and running them locally. I am
less convinced by Docker Swarm as a Prod-grade runtime
environment, and skeptical that Docker the company can
build a viable business out of something that will
rapidly become a commodity; I expect the file format will
long outlive the company that created it. But those are
nothing to do with the question really, which can be
answered purely technically.
eatmyshorts - 1 hours ago
Curious--how do you feel about Mesosphere and Kubernetes
vs. Docker Swarm?
peterwwillis - 3 hours ago
It's basically leaning on shitty software engineering to produce
increased reliability in release engineering/operations.

The old model was a system would be running, and a lot of
software components within that system would depend on each
other, creating a web of dependencies. If one of the dependencies
had a problem, it could bring down the whole system [in theory].

The new model is "simpler" in that every software component has
its own independent operating environment, with [supposedly] no
dependency on the others. In this way, if one dependency fails,
it can do so independent of the system at large - the failed
piece is simply replaced by a different, identical piece. In
addition, the component environments don't store state or
anything else that would be necessary in order to replace it.

We basically bloat up our RAM and waste CPU and disk space in
order to be able to understand and support the system in a more
abstract way, in the service of better availability, and also,
better scalability.

How is this different from simply running a bunch of chroot'ed
services on a system without containers? It really isn't. But you
get more control over the software by adding things like
namespaces and control groups, and by everyone using the same
containers, more uniformity. By dealing with the annoyance of
abstractions and bloat, we get more human-friendly
interoperability.
femto113 - 4 hours ago
What you're missing is that if two containers have the same base
they will share all of those resources (both on disk and in RAM).
Thus while each container is a complete system you can still fit
many more containers on a given host than you could VMs (which
don't share anything with each other).
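One way to see the sharing (a sketch; docker system df -v reports
per-image shared size):

    printf 'FROM debian:9\nRUN apt-get update\n' > Dockerfile.a
    printf 'FROM debian:9\nRUN apt-get update\nRUN apt-get install -y curl\n' > Dockerfile.b
    docker build -f Dockerfile.a -t app-a .
    docker build -f Dockerfile.b -t app-b .
    docker system df -v   # the common debian:9 layers are stored only once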
xorcist - 1 hours ago
VMs are irrelevant because containers are not VMs.
wpietri - 1 hours ago
Seems relevant to me. People have been using VMs for
situations where they can now use containers.
rwmj - 4 hours ago
Hmm, well sometimes that is true. Because containers don't
encourage stable APIs, another scenario is that each container
has slightly differing versions of each library because of
unnecessary API changes (or in the positive case, because they
extensively test the program with a particular library
version).
SomeStupidPoint - 3 hours ago
Containers are how we managed to have something like five or
six versions of Python in our production environment.
subway - 3 hours ago
How do you apply security updates to 'five or six versions
of Python' without losing your mind?
SomeStupidPoint - 1 hours ago
You don't.

There's no good reason we should have had that many in
production. We had three versions of the 2.X series and two
versions of the 3.X series because of mixing-and-matching base
images we used, plus management deciding that we could do partial
upgrades of Python version by upgrading a project at a time. (We
switched from 2 to 3, which meant we had containers -- with
different base images -- where we updated the 2.X version but not
the 3.X version, and containers where we updated the 3.X version
but not the 2.X version. This gave us all kinds of mixes and
matches of Python 2/3 versions.)

So I just hoped whoever was maintaining base images was actually
maintaining their security patches, kept the versions we were
intentionally using up to date during container construction, and
it (mostly) just sort of worked out.

We're down to... 3 versions of Python and 3 base images. I'm
trying to get down to 2 versions of Python (a 2.X and a 3.X).
rwmj - 2 hours ago
I can see two different versions being necessary (Python
2.x and 3.x), but is it really necessary to have versions
other than the latest of each?
imhoguy - 3 hours ago
With block/page level deduplication you can achieve even better
results for VMs (ZFS, KSM etc). Containers can run on such a
setup too.
moondev - 1 hours ago
Immutable infrastructure. And there should be no pain involved if
the ci/cd pipeline is automated correctly.
[deleted]
alexnewman - 1 hours ago
It's time for the cash grabs before the economy implodes
jdoliner - 5 hours ago
There's a couple of things in this article that I don't think are
true. I don't think Ben Golub was a co-founder of Docker. Maybe he
counts as a co-founder of Docker but not of Dotcloud? That seems a
bit weird though. I also am pretty sure Docker's headquarters are
in San Francisco, not Palo Alto.
Patrick_Devine - 50 minutes ago
You're correct with both assertions. As someone who lives in
Palo Alto and works for Docker, I can only wish that it were
headquartered there.
pbiggar - 2 hours ago
Cofounder is a title, like any other. It doesn't just refer to "I
was there at the start of the corporate entity". If they're
calling Ben a cofounder, then they certainly mean it. You're
right that he wasn't there at the start of Dotcloud, but that
doesn't mean that he didn't have (or didn't earn) the title of
cofounder of Docker. Ben's linkedin has "co-founder" on it, so
it certainly wasn't the article making a mistake.
eldavido - 1 hours ago
Cofounder is sort of nebulous; it doesn't carry any official
legal authority and isn't an "officer" of the corporation in most
senses, though it has a lot of cachet in SV.

I view it as a relatively cheap bargaining chip that can burnish
a person's reputation, but has little descriptive value in terms
of what someone actually did to move the business forward.
pbiggar - 1 hours ago
I disagree. You're not exactly handing the title out willy-
nilly. If they're calling Ben a cofounder, I'll bet that has
significant meaning to them.
eldavido - 38 minutes ago
Sure, but it's a lot different from being a CEO, shareholder,
member of the board, or high-level corporate officer. "Founder"
is something you can bestow retroactively, not so much with
"CEO".

And I'm seeing all sorts of companies hand out titles like
"Founding Engineer", "Founding Developer", etc. with the same
chump-change 100k/0.1% option packages as before.
Steeeve - 2 hours ago
The guy who came up with chroot in the first place is kicking
himself.
[deleted]
neolefty - 2 hours ago
By analogy, there's a lot more to a Model T than an internal
combustion engine.
slim - 2 hours ago
Docker is funded by In-Q-Tel
eatmyshorts - 1 hours ago
I'm very curious what your concern is with this. I understand the
CIA's interest and value in PayPal, Facebook, and Google. But
what do they gain from Docker, other than as a simple investment?
Backdoors or something?
fapjacks - 1 hours ago
The shadow government is containerized.
raiyu - 45 minutes ago
Monetizing open source directly is a bit challenging because you
end up stuck in the same service model as everyone else, which is
basically to sell various support contracts to the Fortune
100-500.

Forking a project into an enterprise (paid-for) version and
limiting those features in the original open source version
creates tension in the community, and usually isn't a model that
leads to success.

Converting an open source project directly into paid-for software
or a SaaS model is definitely the best route, as it reduces head
count and allows you to be a software company instead of a
service company. Perhaps this is best captured by Github wrapping
git with an interface and community and then directly selling a
SaaS subscription, and eventually an enterprise hosted version
that is still delivered on a subscription basis, just behind the
corporate firewall.

Also of note is that Github didn't create git itself; the product
instead came out of a direct need that developers saw themselves,
which means they asked "what is the product I want," rather than
"we built and maintain git, so let's do that and eventually
monetize it."
muppetman - 3 hours ago
Gosh, how did the Internet, services, etc. ever work without
containers? Oh, that's right, perfectly.
ralmeida - 2 hours ago
That may be true, but should we then stop trying to evolve
technology? The idea of any advance is to either keep quality and
reduce costs (time and money investment), or to improve quality.

Since, in your opinion, the Internet already works perfectly, the
good Docker brings to the table is to keep it working perfectly,
but requiring fewer resources (time and money) to do so.
wpietri - 1 hours ago
Heh. I am always amazed to see comments like muppetman's. Why get
into technology if you don't like to see things improved?

It reminds me of this bit from Douglas Adams:

"1. Anything that is in the world when you're born is normal and
ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you're fifteen and
thirty-five is new and exciting and revolutionary and you can
probably get a career in it.
3. Anything invented after you're thirty-five is against the
natural order of things."
foota - 3 hours ago
My first reaction was that I was surprised it wasn't higher.

My second reaction was incredulity at how ridiculous my first
reaction was.
derricgilling - 2 hours ago
They seem to be in a unique position to monetize with multiple
models. Compared to something like Github, Docker and Docker
Swarm have much more opportunity for large enterprise support
contracts. Most companies feel that Docker is synonymous with
containers, yet still don't fully understand the technology. Git
already had high adoption years before Github was around, so
developers were familiar with it (or other VCSes) and didn't need
to spend thousands of dollars to support a git repo.

On the other hand, I'm curious how much revenue DockerHub is
bringing in and where they plan on taking it. That model seems
closer to Github. Will it be a newer way to discover new open
source or even proprietary images, like how devs use Github?
[deleted]
[deleted]
shams93 - 3 hours ago
Redhat has a market cap of around 15 billion; that gives a rough
idea of what kind of value Docker could build with a purely
service-based model for an open source product.
gaius - 2 hours ago
Redhat has a vast product suite now - they do a heck of a lot
more than just RHEL, and a lot of it is "enterprise" stuff with
juicy support contracts. Docker is a one-trick pony in
comparison.
dopamean - 2 hours ago
Wasn't RedHat a one trick pony at one point? I don't think it's
insane to bet that they'll be as successful as RedHat one day,
and it seems these investors would like to bet on it.
MrOwen - 1 hours ago
I think (with the exception of OpenShift), most of RH products
are acquisitions. I can think of plenty of acquisitions Docker
can make to take a similar path: Portainer, Kontena, some CI
software (Drone, etc.). All that would kind of make Docker Inc a
one-stop shop for startups (definitely) or whoever else wants
everything from one company.
eatmyshorts - 3 hours ago
My bet is that Docker develops a very healthy services business,
offering consulting services to companies that wish to use Docker
within their organization, on par with RedHat on Linux. With
that, a $1.3b valuation is the tip of the iceberg.
throw2016 - 4 hours ago
Docker generated value from the LXC project, aufs, overlay, btrfs
and a ton of other open source projects, yet few people know
about these projects or their authors -- and in the case of the
extremely poorly marketed LXC project, even what it is, thanks to
negative marketing by a Docker ecosystem hellbent on 'owning
containers'.

Who is the author of aufs or overlayfs? Should these projects
work with no recognition while VC-funded companies with marketing
funds swoop down and extract market value without giving anything
back? How has Docker contributed back to all the projects it is
critically dependent on?

This does not seem like a sustainable open source model. A lot of
critical problems around containers exist in places like layers
and the kernel, and these will not get fixed by Docker but by
aufs or overlayfs and the kernel subsystems; but given most don't
even know the authors of these projects, how will this work?

There has been a lot of misleading marketing on Linux containers
right from 2013, here on HN itself, and one wishes there was more
informed discussion to correct some of this misinformation, which
didn't happen.
SwellJoe - 3 hours ago
I was always amazed/impressed/annoyed that when Docker exploded
in popularity it was a tiny pile of code plus a bunch of mature
and mostly mature pieces from the OSS community, and yet "Docker"
was what everyone recognized from that equation... even though
95% of the code was something else. Possibly the most amazing
thing was that Docker was so successful while doing less than
LXC.

It was incredibly impressive marketing, no matter what else we
might say about it. And, since then, of course, they've leveraged
that early success into a real platform and they've since built a
much bigger pile of things (some are even really good). It's kind
of a testament to the wildly over-optimistic approach of Silicon
Valley product-building, and maybe also kinda damning of the sort
of pillaging that the process often involves.

I had my reservations about Docker in the beginning, for all the
reasons I mentioned above, but I'd be unlikely to bet against
them, honestly. And, they have contributed quite a bit to OSS.
I've been poking around in golang, lately, and have found a lot
of libraries in my area of interest created by the Docker folks.
They're seemingly doing good things, though not on par with Red
Hat, which is entirely an OSS company.
sah2ed - 43 minutes ago
> Possibly the most amazing thing was that Docker was so
successful while doing less than LXC.

This is where I agree with Peter Thiel's very succinct (although
somewhat philosophical) definition of the term technology in
"Zero To One": it allows us to do more with less.

Sometimes, merely re-imagining a better way to use existing tech
could unlock new kinds of productivity previously unimagined. One
of the best examples of this is the iPhone.
imhoguy - 3 hours ago
I would add the cgroups concept in the Linux kernel in general.
You can run Docker images with a small bash script and
systemd-nspawn alone.
throw2016 - 3 hours ago
It's namespaces. Cgroups can be used to limit any Linux process
and do not have anything in particular to do with containers.
Userland container managers use them to limit mainly CPU and
memory resources for the container processes.

It's kernel namespaces that enable containers and allow one to
launch a process in its own namespace (there are 5 namespaces
containers use).

LXC/LXD containers launch an init in this process so you get a
more VM-like environment. Docker does not launch an init, so you
run a single app process, or a process manager if you need to run
multiple processes. systemd-nspawn is similar to LXC.
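You can poke at the primitive without any container manager; a
minimal sketch using util-linux's unshare:

    # start a shell in fresh PID, mount, UTS, network and IPC namespaces
    sudo unshare --pid --fork --mount --uts --net --ipc /bin/bash
    # inside, hostname/mounts/PIDs/network are isolated; cgroups would
    # be configured separately to cap CPU and memory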
alexbeloi - 2 minutes ago
>Who is the author of aufs or overlayfs? Should these projects
work with no recognition while VC funded companies with marketing
funds swoop down and extract market value without giving anything
back. How has Docker contributed back to all the projects it is
critically dependent on?

Here's an excerpt from a docker blog post from 2013[0]:

>>Under the hood Docker makes heavy use of AUFS by Junjiro R.
Okajima as a copy-on-write storage mechanism. AUFS is an amazing
piece of software and at this point it's safe to say that it has
safely copied billions of containers over the last few years, a
great many of them in critical production environments.
Unfortunately, AUFS is not part of the standard linux kernel and
it's unclear when it will be merged. This has prevented docker
from being available on all Linux systems. Docker 0.7 solves this
problem by introducing a storage driver API, and shipping with
several drivers. Currently 3 drivers are available: AUFS, VFS
(which uses simple directories and copy) and DEVICEMAPPER,
developed in collaboration with Alex Larsson and the talented
team at Red Hat, which uses an advanced variation of LVM
snapshots to implement copy-on-write. An experimental BTRFS
driver is also being developed, with even more coming soon: ZFS,
Gluster, Ceph, etc.

I think the real nature of your complaint is that you want the
recognition to be in the form of cash.

[0] https://blog.docker.com/2013/11/docker-0-7-docker-now-runs-o...
bostand - 3 hours ago
Docker is swimming in cash for some reason only people in SF
understand. In the meantime, LXD is developed by a single guy...
kev009 - 3 hours ago
Do they actually have any significant revenue? I love developer
tools companies, but there are several tools upstarts that have no
proven business model. They look like really bad gambles in terms
of VC investment, unless you can get in early enough to unload to
other fools.
eatmyshorts - 1 minutes ago
I don't know their numbers, but it has been reported that their
2016 revenue was north of $10m, and that their 2017 revenue is
more than 2x that. This is just a guess, but I'd guess that they
are seeing >300% year-over-year revenue growth, and that they're
projected to see >$50m in 2017. I would guess 20x current
revenue, so a $1.3b valuation would mean roughly $10m per month
in July 2017 (starting from $2-3m in Jan 2017). If this is
accurate, I'd be glad to invest in Docker at a $1.3b valuation.
new299 - 4 hours ago
I'm so curious to understand how you pitch Docker at a 1.3B USD
valuation, with, I assume, a potential valuation of ~10B USD
needed to give the investors a decent exit. Does anyone have an
insight into this?

Looks like Github's last valuation was at 2B USD. That also seems
high, but I can understand it somewhat better, as they have
revenue and seem to be much more widely used/accepted than
Docker. In addition to that, I can see how Github's social
features are valuable, and how they might grow into other
markets. I don't see this for Docker...
elmar - 4 hours ago
You pick the Valuation, I set the Terms:
https://medium.com/@Alexoppenheimer/you-pick-the-valuation-i...
jasode - 4 hours ago
>Does anyone have an insight into this?

I don't have insight but I'm guessing the valuation comes from
the prospects of enterprise sales.[1] Also, Docker (the company)
isn't staying still. We have to assume they will evolve to add
future upmarket services on top of Docker that companies will pay
for. Investors would see the PowerPoint slides for those future
products but we as outsiders don't.

Maybe an analogous situation would be the Red Hat multi-billion
valuation even though the Linux kernel itself is free and open
source.

[1] https://www.docker.com/pricing
programd - 4 hours ago
Their enterprise pricing is per node. So if enough companies sign
on to this, every box/VM they put docker on brings Docker Inc.
revenue. I'm sure at large scale this will be more of a flat-rate
negotiated deal ($XX,XXX per year) but still, that's a lot of
potential cash on the table.

The risk of course is that nobody will want to pay per-node and
the community will just invest in the open source container
ecosystem and replicate the Enterprise features with plugins and
forks. Still, the market might be big enough that they can become
another Red Hat just based on support and stewardship revenue.
axus - 4 hours ago
VMware did pretty well, though!
ccmonnett - 2 hours ago
I think Docker would be quite happy being the 'next
VMware', and as an ex-VMW person myself it's hard for me to
fault that, too.
bluGill - 3 hours ago
There are enough things that I, as a company user, find annoying
about docker to make me suspect some company will help fund the
community fork. We are not quite there, but we will be if docker
keeps making changes that break us.
new299 - 4 hours ago
Becoming another Redhat wouldn't be enough to justify
investment at this valuation though would it? They need an
upside potential 5 to 10x higher than Redhat's current
valuation.
yeukhon - 3 hours ago
Well, valuation is a gamble. You can try to justify the potential
growth. The tech community is embracing the adoption and maturity
of container-based CI/CD deployment. In the end, some number
trick is done, a higher valuation is posted, and investors (new
and old) go nuts. Some months later, the big guys cash out a
large part of their initial buy-in and move on to the hot new
baby in town.
ransom1538 - 4 hours ago
In 2007 we used to rack hardware. Like physically open boxes,
rack hardware with Ethernet cables. In 2014 I don't think a
startup existed trying to use physical machines - it was all AWS,
etc. We then would use all this BS like puppet, salt, chef, etc.
to manage all the chaos (devops). Now 2017+, I am starting to see
containers become the de facto standard. I see the future as
running a docker file on your laptop, running it through a CI
system, then pushing it to a container service. Docker is the
clear standard.
nawitus - 4 hours ago
" I see the future as running a docker file on your laptop,
running it through a ci system, then pushing it to a container
service. Docker is the clear standard."Where I work this has
been the standard for over a year now.
samstave - 3 hours ago
Would you mind expanding on exactly what the CI pipeline/stack
your company made its standard looks like?
nawitus - 3 hours ago
Jenkins 2.x for building the images (we just execute docker build
and push in a Jenkinsfile), then we use Kontena for orchestration
(e.g. the Kontena agent pulls the image from the docker
registry). For local development we use docker-compose.

Jenkins and the Jenkins slaves also run on docker, and are
managed by Kontena. In fact, the whole platform/pipeline runs
completely on Docker. We use Ansible to install Kontena/docker on
new servers.
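The build-and-push step in a setup like that boils down to a
couple of commands for the Jenkinsfile to shell out to (registry
host is made up; BUILD_NUMBER is Jenkins' build counter):

    docker build -t registry.example.com/myapp:$BUILD_NUMBER .
    docker push registry.example.com/myapp:$BUILD_NUMBER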
nrb - 3 hours ago
The method I've been using successfully:
1. Git commit triggers CircleCI build and test phase
2. CircleCI deploy phase uploads the image to GKE
3. Google Container Engine stages the deployment for release
throwaway0071 - 4 hours ago
Except you don't need Docker Inc. for that. Many container
runtimes exist now; the user experience is very easy to
replicate. Plus, their cloud doesn't seem to be making any money.
So I too have trouble understanding where this valuation is
coming from.
thinbeige - 4 hours ago
> user experience is very easy to replicate

Wrong. The built-in Docker Swarm currently has the easiest UI for
container orchestration (there is still stuff which could be done
better, though). This, paired with sensible defaults and
batteries included, such as a load balancer, makes Docker the
clear winner, and apparently nobody has been able to replicate
the UX. I know k8s has a bigger market share but it is also way
more complex.

Edit: Why the downvotes? Afraid that your k8s know-how will drop
in market value? Please reply with valid counter arguments
instead of this maddening silent downvoting. If I was wrong let
me know where and why.
Sivart13 - 3 hours ago
The downvotes may be because you matter-of-factly retorted
"Wrong." to a subjective opinion. But also maybe, don't
sweat the downvotes?
meredydd - 3 hours ago
> Why the downvotes?

Honest answer: It's rude to open with the one-word sentence
"Wrong." You compounded it by implying that anyone downvoting is
doing so in bad faith. Neither is a good look. The rest of your
comment was fine.

> Please reply with valid counter arguments

Sure! So, Kubernetes is Google's attempt to make real,
full-strength Google-ish infrastructure "as simple as possible,
but no simpler". This kind of infrastructure is really hard, so
"as simple as possible" is still quite complicated. This makes
k8s a pain in the ass to understand and use.

Docker swarm comes from the opposite end - it's dead simple to
use, and seems to be aiming for the 80% use case. After all, most
companies are not Google, and can work with a less complicated
solution that offers a "Just Push Go" experience. The downside is
that it's less flexible and less robust. (I also get the distinct
sense that the engineering was rushed. But that can be fixed if
it stays popular for long enough - eg I hear MySQL is decent
these days.)

The potential problem, as kuschku is pointing out, is that the
bigger, more Enterprise-y and more lucrative customers become,
the more likely they are to want the power and robustness of
Kubernetes. This presents an existential threat to Docker Inc.
They could end up fully commoditised, building a vital platform
that provides tons of value, but which they can't charge for
because all the big support contracts go to Kubernetes Managed
Services Providers or whatever.
kuschku - 4 hours ago
The problem there is that in the Enterprise market, which Docker
is targeting, K8s is far better suited.
xorcist - 1 hours ago
Not only is Kubernetes in a better place than Swarm, the core
Docker engine is only a cog in that machine. Should the need
arise, it would be trivial for them to swap it out for another
container engine. Kubernetes is the magic sauce, not Docker.
[deleted]
nawitus - 4 hours ago
So what is a good replacement for docker?
eatmyshorts - 3 hours ago
rkt, containerd, LXC, runC, OpenShift. There are others,
too. rkt and OpenShift seem to be the most common ones
that aren't Docker.
bonzini - 3 hours ago
OpenShift for example. It uses docker containers and can be
used for both application containers and microservices.
bdcravens - 1 hours ago
Keep in mind the distinction between running docker
containers and what parent said: "You don't need Docker Inc
for that"
gaius - 2 hours ago
> Except you don't need Docker Inc. for that

Exactly. Build your docker image with the free open source tools
then push it onto Azure, for example. How does Docker the company
see any revenue here? Or if you are running on-prem with DC/OS in
a private cloud. I don't know anyone using Docker's own cloud,
and why would you? They need to sell either services or an
"Enterprise" version that is better than what you can do without
them. I think containers are definitely the future, but I also
see containers as being just a commodity, no more exciting than
Makefiles and RPM are now. The money will be in running them, and
Azure, AWS et al will have that stitched up.
praneshp - 12 minutes ago
Docker has at least one product (Docker data center) that
is aimed at enterprises. I know because I was almost going
to go work there on that product. It's not just a container
company.
ransom1538 - 4 hours ago
Sure. You can host your own github too. But the future will
be containers. They seem pretty good at that.
mbreese - 4 hours ago
The question isn't if Docker the company will be successful
in the future -- it's will they be as successful as that
valuation suggests by selling support contracts.
smt88 - 4 hours ago
You can host your own GitLab*, not your own Github.

Also, a lot of the value of Github is social. You can only get
that (after a certain point) by paying Github money. By contrast,
you can get 100% of the value of the Docker community without
paying a cent to Docker, Inc.
keeran - 3 hours ago
FWIW you can host your own GitHub:
https://enterprise.github.com
alfalfasprout - 3 hours ago
and this is important because a good deal of tech
companies rely on self-hosted GHE or hosted Github.
VT_Drew - 2 hours ago
$21 a month per user... to host it on my own hardware... no
thanks. If I am hosting on my own hardware then you get a
one-time fee, none of this subscription nonsense.
mosdave - 1 hours ago
glad I'm not the only one to find this subscription BS
patently insulting.
zjaffee - 4 hours ago
The kind of companies that spend large amounts of money on
things like support, think more traditional enterprises, are
spending a ton of money on docker's paid services, because
they do not have the ability to hire enough engineers to
implement something in house.
maxfurman - 4 hours ago
I think you are underestimating the power of a name brand.
Docker is synonymous with containers to a lot of people, and
if they want to pay for container support they'll pay Docker.
Similar to, as other commenters have said, people pay Red Hat
just so they can have someone to complain to when there's a
problem.
radicalbyte - 2 hours ago
This. I've been working in Government recently; it's all
containers and it's all Docker. If anything, I'm surprised that
their valuation isn't higher.
koolba - 2 hours ago
Are they paying Docker (the company) for anything? Or just using
containers as a part of a wider software infrastructure?

I've been using Docker since early 2014 and they haven't gotten a
dime from me. If you count the storage and bandwidth on the
public registry, I've cost them money.
throwaway5752 - 2 hours ago
Container format brand? I bet Dell/HP/et al thought they had a
datacenter advantage with their brands over whitebox, but we've
seen how that has worked out.

What you described is a services/support company, something Red
Hat has worked really, really hard not to be over the years (to a
debatable degree of success). Being there for support vs OSS is
not how you get great margins.

Anyway, at this level of the stack, in prod settings, brand won't
take you that far.
alasdair_ - 3 hours ago
I just checked - I had no idea that Red Hat had a market
cap of 17.8BN right now. I'd assumed it was about 10% of
that.
sah2ed - 16 minutes ago
I wonder where Marc Fleury [0] is these days. He sold JBoss to
RedHat for a cool $350m back in the day.

[0] https://en.wikipedia.org/wiki/Marc_Fleury
user5994461 - 2 hours ago
Companies buy support from RedHat openshift or Pivotal
CloudFoundry to run containers. They never pay Docker Inc.
giobox - 2 hours ago
> They never pay Docker Inc.

That's simply not true. Having a support team they can call is,
rightly or wrongly, a large part of the reason some companies
consider buying Docker Enterprise Edition at all.
user5994461 - 56 minutes ago
Having a support team they can call is a large part of why they
pay RedHat or Pivotal for docker.
computerex - 3 hours ago
Most major cloud providers provide support for docker and
private docker registries out of the box.
[deleted]
tw04 - 2 hours ago
>In 2014 I don't think a startup existed trying to use physical
machines - it was all AWS, etc.

You're also starting to see those startups realize how much money
they're wasting on cloud services once they hit scale. It is
EXTREMELY expensive to do cloud if you're even remotely efficient
with your infrastructure, unless you've got extremely bursty and
unpredictable workloads.
ransom1538 - 2 hours ago
"once they hit scale"Most don't. That is the value.
anton-107 - 2 hours ago
But in return it's also extremely cheap to bootstrap and fail
user5994461 - 2 hours ago
It is much cheaper to run in the cloud than to buy and operate
your own hardware. The cloud is also a lot easier to optimize and
save money with.
Deimorz - 1 hours ago
It seems easier to optimize because you're starting out so
far away from the goal, so you have a lot of space to
improve. The money you mostly end up "saving" is money that
you didn't really need to spend in the first place, but you
chose to spend more instead of going through the optimizing
effort up-front (which can be a perfectly valid approach).
venantius - 2 hours ago
You're getting downvoted, but your answer is partially
correct: when you're a small startup, it is almost
certainly cheaper (in terms of time and operational
complexity) to outsource infrastructure to the
cloud. However, once you hit significant scale, the lessons
from the operational experience of all of the major firms
have been pretty consistent: it's cheaper to operate your
own data centers than it is to outsource them.
localhost - 1 hours ago
Depends on what you mean by scale. If scale means providing
services globally to many jurisdictions, you're going to have to
start doing the things that the big players do, like putting data
centers in specific geographies (e.g., EU, China). That means
getting real estate, power, etc. At that point I think the scale
tips in the other direction, where it's now more costly for you
to "scale". So: cheap at the small end, cheap at the high end,
and expensive in the middle (cloud).
owenmarshall - 1 hours ago
Bingo.

From the perspective of a Fortune (checks list) 15 company: AWS
saves us a fortune. No facilities costs all over the world with
power/real estate/lawyers to handle local laws. No data center
engineers across the globe. A solid discount on list. Consistent
bills (thanks to judicious RI buys) and servers that are
available in minutes - not weeks (have you ever seen enterprise
IT ticketing practices?!)

If we had two orders of magnitude fewer
employees/servers/locations AWS wouldn't make sense. But at this
scale nothing else makes sense.
bsder - 1 hours ago
Not even close to universally true.

Bandwidth is almost always cheaper at a colo. Most compute
instances are cheaper to buy and rack if you have continuous
loads. Disk ... is tricky.

It is faster and has lower up-front costs to get clouds up and
running initially. For most startups, who are going to fail,
that's Good Enough(tm). If, however, you continue existing for a
while, the other things start to add up at much lower levels than
you would expect. I'd say the crossover is around when you are
spending about $15,000 per year. Your colo is about $10,000 of
that per year and you can rack 5 new machines every year for the
remaining $5K. That's not that much for an actual business.

Cloud is good for your initial startup and for bursty situations.
Once you have continuous loads, you need to be moving stuff back
to your own hardware.
mdorazio - 1 hours ago
How does employee cost factor into that, though? I don't
do devops, so I don't know at what scale you start
needing dedicated people to manage hardware and
configuration at a colo. However, employees are really
expensive in comparison to just about everything else, so
even if you're spending an extra $10K a year on AWS than
you would at a colo, if using it saves you just 10% of
one person's time you're coming out ahead.
bsder - 45 minutes ago
Except that, invariably, I'm not saving that person's time.

It's a real myth that the cloud magically saves me a sysadmin. I
have found almost exactly the opposite. Using the cloud
effectively takes more time and more expertise. Debugging the
cloud effectively takes WAY more expertise. Combine this with the
fact that someone has to be able to architect a system to fit
within the constraints of being on the cloud, and you're down
extra employees.

The difference is in: "Eh, it's been down for 3 days? Sigh. Just
reboot it." vs. "Um, that request hung for 93 seconds. Why?" In
the first case, the cloud is fine. In the second case, someone is
going to have to traipse over an enormous number of systems
(which you don't own and can't always instrument) and variables
(some of which you aren't even aware exist) to hunt it down. If,
however, you can say "Pull that off the cloud into our own
systems and keep an eye on it," you have made your debugging life
a lot easier.

Of course, once you have the ability to do that, your team
realizes it and starts asking: "Given how much time we spend
debugging issues with the cloud that aren't actually our fault,
why are we on the cloud again?" I always smile when that
realization kicks in. Now I generally have to stop the team from
pulling everything off the cloud. However, that's a much easier
task.
elmar - 34 minutes ago
The epic story of Dropbox's exodus from the Amazon cloud empire:
https://www.wired.com/2016/03/epic-story-dropboxs-exodus-ama...

But some companies get so big, it actually makes sense to build
their own network with their own custom tech and, yes, abandon
the cloud. Amazon and Google and Microsoft can keep cloud prices
low, thanks to economies of scale. But they aren't selling their
services at cost. "Nobody is running a cloud business as a
charity," says Dropbox vice president of engineering and ex-
Facebooker Aditya Agarwal. "There is some margin somewhere." If
you're big enough, you can save tremendous amounts of money by
cutting out the cloud and all the other fat. Dropbox says it's
now that big.
kristianc - 2 hours ago
> Now 2017+, I am starting to see containers become the de facto
standard.

Why would containers become the de facto standard rather than
something like Cloud Foundry, which abstracts it away entirely?
Docker is just a slightly less messy version of the Puppet, Salt,
Chef devops BS with added complications around networking.
new299 - 4 hours ago
Right... I just don't see that being worth 10B USD, I guess. I
don't see how you come up with that number, and I assume this is
where the "upside potential" needs to be.

In the case of AWS, I can see the advantage: you're renting a
resource you need, rather than buying it. I guess I just don't
see how you extract similar amounts of revenue from Docker...

Redhat is valued at ~1.3B USD. Github, currently at ~2B USD. How
do you justify 10B USD for Docker? Obviously you can (they did),
I just don't understand how this is done well. Does anyone have a
better insight here?
objclxt - 4 hours ago
> Redhat is valued at ~1.3B USD.Redhat's market cap is $17
billion, so I'm not sure where you're getting that figure
from. https://www.google.com/finance?q=NYSE:RHT
comstock - 4 hours ago
I guess I was looking at their total equity. But I guess you're
correct that market cap would be a better number to look at.
[deleted]
bonzini - 4 hours ago
Red Hat's revenues are a little short of 3 billion (though
growing fast), and the market cap is about 15-20 (going by
memory). That is with a full software stack going from IaaS to
containers to application server. Right now containers are the
thing, but what if they become a commodity in 5 years? And can
the Docker brand compete against the Google and Red Hat brands
that are also selling container management platforms (fully open
source in the case of RH, no enterprise features and no
lock-in)? It seems like a very generous valuation.

Disclaimer: working at RH
[deleted]
Silhouette - 3 hours ago
I wonder what advantages Docker offers over plain LXC and LXD for
a lot of real world use cases these days, though. Does it have
some unique, defensible benefit that justifies this sort of
valuation?

I haven't set up enough systems to have a strong opinion on this
one myself, but those I know who definitely have seem to come
down in favour of plain LXC, and possibly LXD, in most scenarios.
Typically, their argument is that the features you probably want
are there anyway, and so the extra weight of the Docker ecosystem
now seems to introduce more irritations than it fixes.

Sometimes they seem to distinguish between hosting on
infrastructure like AWS and setting up containers with colo or
closer-to-home managed hosting. I don't understand the subtleties
here; can anyone enlighten me about why they might go with Docker
in one case but be quite strongly against it in the other?
icebraining - 3 hours ago
I think that essentially, the LXD project focuses on making
VM-like containers (with a full OS that you SSH into, etc),
whereas Docker and friends focus on immutable application
containers - essentially, a zone for running a single
application, which you delete and start from scratch rather than
entering and changing stuff.

In fact, the LXD people have a guide on how to run Docker
containers inside LXD :)
https://stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
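The gist of that guide, as a sketch (container name is made up;
see the post for the full steps):

    lxc launch ubuntu:16.04 docker-host -c security.nesting=true
    lxc exec docker-host -- apt-get install -y docker.io
    lxc exec docker-host -- docker run -d nginx  # Docker inside LXD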
Silhouette - 2 hours ago
I understand that the focus/brand of Docker is aimed at a
slightly different scenario.

What I'm not seeing is why -- from an objective, technical point
of view -- you couldn't do almost any of the same things with
plain LXC these days, perhaps with LXD on top if the vanilla UI
for setting things up isn't sufficient.

I mean, Docker Hub and some nice UI tools are great and all, but
I don't see a USP or a defensible position worth a billion dollar
valuation in there. So what is really behind the confidence that
investors at this level must surely have?
eatmyshorts - 1 hours ago
Yes, you can do this with most other container systems.
But people haven't done this, except with Docker. Docker
has a significant lead on competitors. People have
little reason to adopt a competitor as a result.
Silhouette - 34 minutes ago
OK, but presumably most of those people also aren't
paying anything to Docker-the-business for using it right
now. If they want to be worth billions, there will have
to be real revenues sooner or later, and at that point
whoever is paying does have an incentive to look at
competitors. So at the risk of repeating myself, I'm
still wondering what they have as a USP or other
effective barrier to competitors stealing away their
market share as fast as they built it up. Surely there
must be something if investors are actually going in at
this level, and I'm very curious to know what it might
be.
[deleted]
mbesto - 2 hours ago
Simple - VMware is valued at $40B. Someone is going to make the
argument that Docker is the biggest threat to VMware's business,
and VMware will likely just buy them to keep shareholders happy.
yc20017 - 1 hours ago
Except that VMware never open sourced VMkernel. That massively
contributed to the lack of feature parity between VMware and
Microsoft (I'd say VMware was 1-2 years ahead of MSFT). That
helps you secure the market in the early years, and that's what
you see in the valuation. Docker Inc is not similar to VMware in
that regard.
manmal - 1 hours ago
Docker does not use VMware, so it's a threat - it's that simple.
wbl - 1 hours ago
Everyone already has containers, and has for decades now. Why
didn't they buy Sun then?
sah2ed - 25 minutes ago
If by everyone you meant UNIX-derived OSes, it's arguable that
they all had containers in one form or another, with the
exception of Linux, which only changed when cgroups [0] were
added to the v2.6.24 kernel ~10 years ago.

I think that addition to Linux paved the way for containers to
enjoy wider adoption than was previously possible with other less
popular container tech in OSes like Solaris (zones) or BSD
(jails).

[0] https://en.wikipedia.org/wiki/Cgroups
TallGuyShort - 1 hours ago
Perhaps they eventually would have if the company who
currently also owns VirtualBox hadn't done so first.
wtvanhest - 4 hours ago
Valuations in private funding are made-up numbers. The main
difference between public equity and private equity is
liquidation preferences and interest. I don't know the deal
terms, but it is likely that the investors expect to get their
money back, plus some interest, with the potential for more
upside if there happens to be a big exit. They are likely not
counting on that, as their returns are nearly guaranteed through
the liquidation pref plus interest.

Employees of docker just saw their chances of a big monetary exit
cut dramatically with this funding round, since they are in last
place. It is an area that yc continues to be silent on, and is a
travesty of the start up world today.
georgeecollins - 3 hours ago
I am normally a huge cynic when it comes to software valuations,
but I think GitHub is likely to be very valuable. First, it's a
service that I and developers gladly pay for. Second, the network
effects are very powerful. When GitHub becomes the place to find
and store repos that you want to share, there is a lot of inertia
to that. And it becomes a powerful way for developers to find
each other.
mathattack - 4 hours ago
A couple thoughts:
1 - As has been mentioned before, it's not really 1.3 Billion. If
they have liquidation preferences, the value is much less.
2 - If they come in very late, growth investors may be ok with 3X
or 5X.
ahallock - 1 hours ago
Docker still has a long way to go in terms of local development
ergonomics. Recently, I finally had my chance to onboard a bunch
of new devs and have them create their local environment using
Docker Compose (we're working on a pretty standard Rails
application).

We were able to get the environments set up and the app running,
but the networking is so slow as to be pretty much unusable.
Something is wrong with syncing the FS between docker and the
host OS. We were using the latest Docker for Mac. If the
out-of-the-box experience is this bad, it's unsuitable for local
development. I was actually embarrassed.
aaronds - 1 hours ago
Have you tried https://github.com/EugenMayer/docker-sync ? I've
had slow FS issues with docker on mac. Integrating docker-sync
doesn't take long and should help you out.
ahallock - 1 hours ago
I haven't, but I recommended it to one of the devs and he
claimed it 'crashed' docker. Need more investigation, but we
shouldn't need tools like this; it should work by default.
aaronds - 1 hours ago
Agreed, and to my knowledge this is a known issue (I'm sure I've
seen chatter regarding the problem on the docker forums/github).
Nonetheless, with docker-sync I've been able to leverage all the
benefits of docker-compose whilst mitigating those slow mounted
fs issues, so for now I'm happy with it. At least you can
configure it directly into your compose set up, so you don't need
to spend time explaining it immediately when onboarding those new
developers.
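For reference, the moving parts are small; a sketch assuming a
docker-sync.yml already describes the synced volumes (see the
project README for the actual options):

    gem install docker-sync
    docker-sync-stack start  # starts the sync service, then docker-compose up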
ahallock - 1 hours ago
A few things to note: I wasn't using Docker for Mac, but a custom
VirtualBox setup that took over a month to perfect. When I last
tested Docker for Mac, the networking was really slow then, but I
took a gamble that it had been fixed by now.

I will give credit as to how easy it was to get the app running
-- compose makes everything a snap. It's unfortunate that
something is amiss with volumes/networking.
shykes - 51 minutes ago
Docker for Mac has a default filesystem configuration that offers
maximum consistency at the expense of performance. Unfortunately,
in some configurations that can result in "throw computer out the
window" frustration, depending on the I/O pattern of your
project.

As of Docker 17.06, 90% of use cases can safely change the
settings and noticeably improve performance. See the
documentation:
https://docs.docker.com/docker-for-mac/osxfs-caching/#perfor...
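Concretely, the setting is a per-mount consistency flag; a sketch
(paths and image name are illustrative):

    docker run -v /Users/me/project:/app:cached myimage     # container reads may lag the host
    docker run -v /Users/me/project:/app:delegated myimage  # host reads may lag the container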
eldavido - 1 hours ago
I wish people would stop talking about valuation this way,
emphasizing the bullshit headline valuation.

The reality is (speculating) that they probably issued a new
class of stock at $x/share, and that class of stock has all kinds
of rights, provisions, protections, etc. that the others don't,
which may or may not have any bearing whatsoever on what the
other classes of shares are worth.