HN Gopher Feed (2017-11-29) - page 1 of 10 ___________________________________________________________________
AWS Fargate - Run Containers Without Managing Infrastructure
385 points by moritzplassnig
https://aws.amazon.com/blogs/aws/aws-fargate/
___________________________________________________________________
eropple - 5 hours ago
Fargate looks really expensive compared to just running an EC2
instance. A 1 vCPU container with 2GB of RAM will run you
$55/month. An m3.medium with 1 vCPU and 3.75GB of RAM is $49. The
prices seem to get uncomfortably worse from there. I haven't
priced them out the whole way, but a 4 vCPU container with 8GB of
RAM is price-competitive ($222 for the container, $227 monthly for
the machine) with a freaking i3.xlarge, with 4 vCPUs and 30.5GB,
and the i3.xlarge also has 10Gbit networking. Topping Fargate out
at 4 vCPUs and 30GB of RAM puts it right between an r4.2xlarge and
an i3.2xlarge, both with 8 vCPUs and 61GB of RAM (the i3 is more
expensive because it's also got 1.9TB of local SSD).

Enough people are still trying to make fetch happen, where fetch
is container orchestration, that I expect that fetch will indeed
eventually happen, but this is a large vig for getting around
not-a-lot-of-management-work (because the competition isn't
Kubernetes, which is the bin packing problem returned to eat your
wallet, it's EC2 instances, and there is a little management work
but not much and it scales).

If you have decided that you want to undertake the bin packing
problem, AWS's ECS or the new Elastic Kubernetes Service makes
some sense; you're paying EC2 prices, plus a small management fee
(I think). I don't understand Fargate at all.
mark242 - 5 hours ago
Yes. If you're using Elastic Beanstalk, or Cloudformation with
autoscaling, Fargate seems to be an incredible waste of money.
Maybe if you have an extremely small workload that doesn't need a
lot of resources running, I could see it, but at that point you'd
be better off with Lambda instead?
NathanKP - 5 hours ago
AWS employee here. Just want to say that we actually had a typo
in the per second pricing on launch. The actual pricing is:
$0.0506 per CPU per hour
$0.0127 per GB of memory per hour
Fargate is definitely more expensive than running and operating
an EC2 instance yourself, but for many companies the amount that
is saved by needing to spend less engineer time on devops will
make it worth it right now, and as we iterate I expect this
balance to continue to tip. AWS has dropped prices more than 60
times since we started out.
[deleted]
eropple - 3 hours ago
I used the per-hour pricing in my numbers because I assumed the
per-second was wrong, yeah.
derefr - 42 minutes ago
I know AWS services put outsized fees on things that don't
really have marginal costs (e.g. S3 read operations), because
the fees are used to disincentivize non-idiomatic use-cases
(e.g. treating S3 as a database by scanning objects-as-keys.)

Under this economic-incentive-system lens, I'm curious
whether new AWS services might also be intentionally started
out with high premiums, as a sort of economically-self-limited
soft launch. Only the customers with the biggest need will be
willing to pay to use the service at first, and their need
means they're also willing to put up with you while you work
out the initial kinks. As you gain more confidence in the
service's stability at scale and matching to varied customer
demands, you'd then lower the price-point of the service to
match your actual evaluation of a market-viable price, to "open
the floodgates" to the less-in-need clients.
nerpderp83 - 31 minutes ago
Well if something needs more burn in time, the last thing you
want to do is get thousands or millions of customers. The
reliability of a Timex or Toyota has to be high.
talove - 1 hours ago
Just wanted to share a perspective: I think it's a misnomer to
call it expensive or compare using a more abstracted service like
Fargate vs something more granular like EC2.

If I need a service that lets me prototype, build a product, be
first to market, etc., splitting hairs over compute costs seems
moot. Not to say it isn't interesting to see the difference in
pricing or how AWS quantified how to set pricing of the service.

FWIW, if you watched the Keynote stream, the theme of it was
literally "Everything is Everything" and a bunch of catch-phrases
basically meaning they've got a tool for every developer in any
situation.

One other note: from my experience I'd also argue it's often
easier to migrate from fully-managed to more self-managed services
than the other way around. By nature of owning more operations,
you make more decisions about how it operates. Those turn into the
pain-points of any migration project.
djsumdog - 1 hours ago
But does this lock you in to Amazon?

Trying to run a DCOS/marathon or K8s cluster is not trivial. Last
time I looked, every service out there basically spun up a Docker
machine with some auto-magic cert generation. Surely there are
other services out there which will just run containers, allowing
you to deploy from a k8s or marathon template? What are the other
options?
[deleted]
[deleted]
[deleted]
einrealist - 3 hours ago
Can someone do the math and compare it to Cloud Foundry /
OpenShift solutions? That AWS offering seems to be a step into
this part of the market.
srdev - 5 hours ago
Can you elaborate on what the bin packing problem is?
andrewstuart2 - 5 hours ago
It's multi-machine scheduling, basically. Given N resources and M
consumers, how can I fit all M consumers most efficiently, while
using the minimum N?

The bin metaphor is that you imagine one or several bins on the
floor, and a bunch of things to place in bins. Bin packing is just
playing Tetris to make sure all your things are packed into as few
bins as possible, because bins cost money.

https://en.wikipedia.org/wiki/Bin_packing_problem
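[Editor's note: as a toy illustration of the bin-packing framing above
(not anything a real scheduler does verbatim), a first-fit-decreasing
heuristic in Python; the item sizes are made up.]

# First-fit-decreasing heuristic: "bins" are machines of fixed capacity,
# "items" are container resource requests.
def first_fit_decreasing(items, bin_capacity):
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= bin_capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin fits: open a new machine
    return bins

# e.g. container memory requests (GB) packed onto 8 GB machines
print(first_fit_decreasing([4, 3, 3, 2, 2, 1, 1], bin_capacity=8))
# -> [[4, 3, 1], [3, 2, 2, 1]] : two machines instead of one per container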
manigandham - 4 hours ago
Running load efficiently on a given resource. Most VMs running
a single app are under-utilized so it's more efficient to pack
apps into containers and run them across a smaller pool of
servers so that they all get the necessary resources without
waste. Kubernetes does really well with this, although ease of
deployment using config files and the abstraction from
underlying vms/servers is probably more useful for most
companies.
eropple - 2 hours ago
Kubernetes emphatically does not do better at resource
utilization than not using Kubernetes. You should figure on
between ten and twenty percent of wastage per k8s node, plus the
costs of your management servers, in a safely provisioned
environment.

You can argue about the configuration-based deployment being
worth it--I disagree, because, frankly, Chef Zero is just not that
complicated--but it's more expensive in every use case I have seen
in the wild (barring ones where instances were unwisely
provisioned in the first place).
manigandham - 30 minutes ago
Based on what evidence? We can put hundreds of customer
apps into a few servers and have them deployed and updated
easily. We could try to manage this ourselves but it's much
less efficient while costing much more effort. GKE also
costs nothing for a master and there is no overhead.

K8S/docker also makes it easy to avoid all the systemd/init
issues and just use a standard interface with declarative configs
and fast deployments that are automatically and dynamically
managed while nodes come and go. We have preemptible instances
with fast local storage and cheap pricing that maintain 100s of
apps. K8S also easily manages any clustered software, regardless
of native capabilities, along with easy networking.

Why would I use chef for that - if it can even do all that in the
first place?
barrkel - 5 hours ago
https://en.wikipedia.org/wiki/Bin_packing_problem

If you have a bunch of jobs and you need to run them efficiently
on a bunch of compute, you need to be careful not to oversubscribe
the hardware, especially wrt memory. There's an isomorphism
between running differently sized jobs concurrently on a compute
resource, and the bin packing problem. It's a scheduling problem.
[deleted]
eropple - 5 hours ago
Kubernetes requires machines big enough to run all your
containers. Those machines are the bins. Your containers are
the packages. Fitting your containers in such that there is no
criticality overlap (in AWS, that all instances of service X
are spread across machines in different AZs) and that there is
room for immediate scaling/emergency fault recovery (headroom
on the machines running your containers) gets expensive. You're
buying big and running little, and that comes with
costs.

Meanwhile, in AWS, you already have pre-sized blobs of RAM and
compute. They're called EC2 instances. And then AWS pays the cost
of the extra inventory, not you. (To forestall the usual,
"overhead" of a Linux OS is like fifty megs these days, it's not
something I'd worry about--most of the folks I know who have gone
down the container-fleet road have bins that are consistently
around 20% empty, and that does add up.)

You may be the one percent of companies for whom immediate
rollout, rather than 200-second rollout, is important, and for
those companies a solution like Kubernetes or Mesos can make a
lot of sense. Most aren't, and I think that they would be better
served, in most cases, with a CloudFormation template, an
autoscaling group with a cloud-init script to launch one container
(if not chef-zero or whatever, that's my go-to but I'm also a
devops guy by trade), and a Route 53 record.

You're
basically paying overhead for the privilege of `kubectl` that,
personally, I don't think is really that useful in a cloud
environment. (I think it makes a lot of sense on-prem, where
you've already bought the hardware and the alternatives are
something like vSphere or the ongoing tire fire that is
OpenStack.)
andrewstuart2 - 5 hours ago
I know you're answering the question of bin-packing, but
after two years of experience with it, I can say that for me,
bin-packing is one of the smallest benefits (though it sells
very well with management), though perhaps a baseline
requirement these days. The real benefits, in my experience,
stem from the declarative nature of cluster management, and
the automation of doing the sensible thing to enact changes
to that declarative desired state.
ttul - 4 hours ago
Immutability, in other words.
eropple - 5 hours ago
Sure. CloudFormation exists for that, though, and both its
difficulty and its complexity are way overstated while also
letting you manage AWS resources on top of that. And it doesn't
cost anything to use.
owaislone - 2 hours ago
And so does Terraform, which is pretty awesome!
eropple - 2 hours ago
Terraform requires significant infrastructure to get the
same state management and implicit on-device access that
CloudFormation's metadata service does. A common pattern
in systems I oversee or consult on is to use
CloudFormation's metadata service (which is not the EC2
metadata service, to be clear) to feed Ansible facts or
chef-zero attributes in order to have bootstrapped
systems that do not rely upon having a Tower or Chef
Server in my environment.

The Terraform domain spec is not sufficiently expressive (just
look at the circumlocutions you need to not create something in
one environment versus another). It's way too hard to build large
modules, but the lack of decent scoping makes assembling many
small modules difficult too. Worse, the domain spec also requires
HCL, which is awful, or JSON, which is a return to the same
problem that cfer solves for CloudFormation. One of my first
attempts at a nontrivial open-source project was Terraframe[1], a
Ruby DSL for Terraform; I abandoned it out of frustration when it
became evident that Terraform's JSON parsing was untested, broken,
and unusable in practice. Out of that frustration grew my own
early CloudFormation prototypes, which my friend Sean did better
with cfer.

If you're looking for an alternative to CloudFormation, I
generally recommend BOSH[2], as it solves problems without
introducing new ones. Saying the same for Terraform is a stretch.

[1] - https://github.com/eropple/terraframe
[2] - https://github.com/cloudfoundry/bosh
andrewstuart2 - 5 hours ago
Eh, there are a lot of terrible things I'd rather put
myself through than writing another CloudFormation
template for any sort of complex infrastructure. It could
have been made easier and more readable if my company had
allowed the use of something like Monsanto's generator
[1], but creating ASTs in JSON is not my idea of a good
user experience.

[1] https://github.com/MonsantoCo/cloudformation-template-genera...
wahnfrieden - 1 hours ago
If your only experience with CloudFormation is hand-
written JSON, it's worth another look. We used to use
troposphere, a Python library for generating
CloudFormation templates, but have since switched back to
vanilla CloudFormation templates now that they added
support for YAML. We're finding it's much nicer to read
and write in plain YAML. We're also now using Sceptre for
some advanced use cases (templatizing the templates, and
fancier deployment automation).
eropple - 3 hours ago
I maintain auster[1] and am a contributor to cfer[2] for
exactly that purpose. ;) CloudFormation really isn't a
rough time anymore, IMO.

[1] - https://github.com/eropple/auster
[2] - https://github.com/seanedwards/cfer
[deleted]
dvlsg - 2 hours ago
If you know those tools exist, maybe. I just put together
a new project using cloudformation (technically
serverless, but it turned into 90 percent cloudformation
syntax anyways), and it was pretty rough.
eropple - 2 hours ago
Maybe it's just me, but as a programmer the first thing I
ever asked when looking at the wall of CloudFormation
JSON was "so how do we make this not suck?".Our job is
not just to automate servers, it's to automate processes,
including stupid developer-facing ones.
jrochkind1 - 45 minutes ago
True, but as a _programmer_, working on a _new to me
platform or package_, I am _very_ reluctant to add an
extra third-party abstraction layer which requires its own
evaluation of quality and stability and some learning curve. It's
gotta be pretty clear to me that it really is "what everyone else
is doing", or I've gotta get more experience with the underlying
thing to be able to judge for myself better.

I've definitely been burned many times
by adding an extra tool or layer meant to make things
easier, that ends up not, for all manner of reasons. I
think we all have.
andrewstuart2 - 2 hours ago
You're not wrong. But in my case, that had been basically
forbidden as an option, essentially because it "wasn't
supported by Amazon," and because there's just additional
risk to non-standard approaches. AWS certifications cover
CloudFormation, so you can hire for that with low risk
pretty easily. Other nonstandard utilities, not so much.
rockostrich - 5 hours ago
Fargate seems like it's an in-between of Lambda and ECS. Lambda
because it's pay-per-second on-demand functions being run (or in
the case of Fargate, containers) and ECS because Fargate is ECS
without having to worry about having the EC2 instances
configured. I'm not sure where this falls in, but maybe
developers were complaining about Lambda and wanted to just run
containers instead of individual functions?
mullen - 5 hours ago
It's Lambda without the 5 minute limit.
lindydonna - 5 hours ago
Plus the ability to do custom containers. For some workloads,
that may be valuable.
eropple - 5 hours ago
I assume as much; my contention is that that's not gonna really
be worth it even to the people who think they want it. Not at
this price.
[deleted]
dastbe - 3 hours ago
The way I think about it is temporal and spatial control, and
giving up control over them so that some common entity can
optimize and drive down your costs. With Fargate, you're giving
up spatial control so you can just pay for the task resources
you asked for. With Lambda, you're additionally giving up
temporal control so you can just pay for resources when your
lambda is actually servicing a request.

When I think about the offerings this way, I can start to decide
when I want to use them because now I can ask myself "Do I need
strict temporal/spatial control over my application?" and "Do I
think I can optimize temporal/spatial costs better than
Lambda/Fargate?".
NathanKP - 5 hours ago
Lambda has some limitations such as cold starts, 5 min max
execution time, etc because it is designed for a much more
granular operational model. Fargate is designed to run long
running containers that could stay up for days or weeks, and
always stay warm to respond to requests, so there is no cold
start.
[deleted]
vhold - 5 hours ago
Maybe it's precisely only billing for cpu and memory consumed, so
if your workload has a small footprint and is mostly waiting
around a lot for other services to respond it would be really
cheap?
colemorrison - 4 hours ago
nah, I asked, it's for the amount reserved, not the amount
used.
https://twitter.com/nathankpeck/status/935930795864211461
axelfontaine - 5 hours ago
Yes, the pricing seems off by 3 orders of magnitude! 2734 USD for
a month of t2.micro-like capacity! Unbelievable!

https://aws.amazon.com/fargate/pricing/
zedpm - 5 hours ago
Check your math. 1vCPU at $0.0506 per hour + 1 GB RAM at
$0.0127 per hour gives $.0633 per hour. At 750 hours per month,
that's $47.48 per month. A T2.micro is $8.70 per month, but not
even close to a whole vCPU, so it's not a direct
comparison.

Edit: I think they have a mistake on the pricing page: the
per-second rate looks more like a per-minute rate. Doing the
calculation with the per-second and again with the per-hour
stated prices gives a 60x difference in monthly cost.

Edit2: Yep, they've now fixed the per-second price; it was
originally 60x the correct price.
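[Editor's note: a quick sanity check of the discrepancy described above,
in Python, using only the per-hour rates quoted in this thread.]

# Dividing the correct per-hour rates by 3600 gives the true per-second
# rates; the figures originally on the pricing page were 60x larger,
# i.e. per-minute rates mislabelled as per-second.
vcpu_hour, gb_hour = 0.0506, 0.0127
print(vcpu_hour / 3600)   # ~0.0000141  true per-second vCPU rate
print(gb_hour / 3600)     # ~0.0000035  true per-second GB rate
print(vcpu_hour / 60)     # 0.00084333  the mislabelled "per-second" figure
print(gb_hour / 60)       # 0.00021167  the mislabelled "per-second" figure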
Androider - 5 hours ago
A T2 will give even better real-world performance than an M3, at
2x vCPU and 4GB of RAM at ~$34/mo. Unless you're compute bound
the bursting performance of T2s is perfect for web services. Add
an instance reservation, with no upfront cost, and you're looking
at ~$20/mo for a t2.medium. Using Fargate will reduce your
instance management overhead, but not worth it at over 2x the
price, at least for me.

I'd rather have two T2's than one M4 for most web services, or 8
T2's over 4 M4's etc. for better performance, reliability and
price. T2's are the best bang for your buck by far, as long as
your service scales horizontally.
eropple - 5 hours ago
You're completely correct. I was just going price-to-price.
ryeguy - 4 hours ago
No way t2's are a good choice for your average webservice. You
don't get the full CPU with t2's. With a t2.medium if you're
using over 20% of the cpu (40% of 1 vcpu or 20% of both),
you're burning CPU credits. So unless you have a webapp that
uses 8gb of memory yet stays somewhere under 20% cpu
utilization (maybe with some peaks), you'll eventually get
throttled.

t2's are for really bursty workloads, which only
makes sense for a webservice if you aren't handling much
traffic in general and you have maybe 2 instances total.
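[Editor's note: a back-of-the-envelope sketch of the credit math being
described. The earn rate and maximum balance below are assumptions based
on the 2017-era t2.medium spec, not figures from this thread.]

# One CPU credit = one vCPU at 100% for one minute. A t2.medium (2 vCPUs)
# earned roughly 24 credits/hour, which corresponds to the ~20%-of-both-
# vCPUs baseline mentioned above. Both figures are assumptions.
EARN_PER_HOUR = 24
VCPUS = 2

def hours_until_throttled(avg_utilization, starting_balance):
    # avg_utilization: fraction of the whole instance (both vCPUs) in use
    spend_per_hour = avg_utilization * VCPUS * 60  # credits consumed per hour
    net_drain = spend_per_hour - EARN_PER_HOUR
    if net_drain <= 0:
        return float("inf")  # at or below baseline: balance never drains
    return starting_balance / net_drain

# e.g. sustained 35% instance utilization with a full (assumed ~576) balance
print(hours_until_throttled(0.35, 576))  # ~32 hours before throttling begins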
eropple - 3 hours ago
Most web apps are IO bound, not CPU bound, and a throttled T2
has IO to spare versus its CPU usage.
Johnny555 - 3 hours ago
t2's are great until something happens and you really need
the full CPU, then you get throttled and suddenly your
service goes down because it can't keep up with the load.

If
you use t2's for anything important, keep an eye on the CPU
credit balance.
cjsuk - 3 hours ago
Just applying selinux policy on CentOS 7 will kill the
CPU credits on a micro instance. Running updates is a
risky business.
ryeguy - 1 hours ago
But if you get throttled in the first place that means
you're going to have some kind of performance degradation
since your server was using more than the baseline level.
Web apps being IO bound isn't relevant here, because the
only requirement for issues to arise is for your server to
have consistent 20%+ cpu usage.
jrochkind1 - 50 minutes ago
Well, they're relevant in that a heavily IO bound app is
probably unlikely to use much CPU -- it's too busy
waiting on IO to use 90% of CPU, and maybe does stay
mostly under 20%. Obviously this depends on a lot of
details beyond "IO-bound", but it is not implausible, and
I think does accurately describe many rails apps.
eknkc - 3 hours ago
Just for a data point: we have been serving around 10 million web
requests / day per t2.medium. Not lightweight static files or
small json responses, actual dynamic pages with database load.
Our servers mostly sit around 20% though.

Might not be that much compared to larger web services but we
have a localised audience so nighttime cpu credits get collected
and used during peak times. So it fits our workload. They are
great value when the credit system works in your setting.

We have a grafana dashboard constantly monitoring credits and
alerts in place though. But haven't had a sudden issue that we
needed to manually remedy.
wdewind - 3 hours ago
> So unless you have a webapp that uses 8gb of memory yet
stays somewhere under 20% cpu utilization (maybe with some
peaks), you'll eventually get throttled.

FWIW this is,
unfortunately, a pretty common description of many low
traffic Rails apps.
ntrepid8 - 5 hours ago
Looking through the pricing page I'm not sure what sort of
workload this would make sense for. Just looking at the examples
from the pricing page I think I'm getting sticker shock.

https://aws.amazon.com/fargate/pricing/

Example 1:
> For example, your service uses 1 ECS Task, running for 10
> minutes (600 seconds) every day for a month (30 days) where
> each ECS Task uses 1 vCPU and 2GB memory.
> Total vCPU charges = 1 x 1 x 0.00084333 x 600 x 30 = $15.18
> Total memory charges = 1 x 2 x 0.00021167 x 600 x 30 = $7.62
> Monthly Fargate compute charges = $15.18 + $7.62 = $22.80

So the total cost for 5 hours of running time is $22.80? Am I
even reading this correctly? If so, what would this be cost
effective for?
luhn - 5 hours ago
I think they mislabeled the pricing. If you look at the per-
hour pricing ($0.0506/CPU-hour and $0.0127/GB-hour), that
translates to $0.00084333 and $0.00021167 per minute, which is
a pretty reasonable price. This also makes sense in light of
their recent announcement of per-minute EC2 billing.
ntrepid8 - 5 hours ago
Ah yes, that makes much more sense. Hopefully they will
update the pricing page with the correct values soon :)
dd_says - 5 hours ago
That is correct
ntrepid8 - 5 hours ago
Hopefully this is the correct math for their example 1:

(1 * 2 * 0.00021167 * (600/60) * 30) + (1 * 1 * 0.00084333 *
(600/60) * 30) = 0.380001

Because that's much better than the original:

(1 * 2 * 0.00021167 * (600) * 30) + (1 * 1 * 0.00084333 * (600) *
30) = 22.80006
NathanKP - 4 hours ago
You are correct, we mislabelled the pricing on launch, it is
corrected now. The correct values are:
$0.0506 per CPU per hour
$0.0127 per GB of memory per hour
dzonga - 3 hours ago
Looking at the number of Amazon products on the front page, it's
mind-blowing. Amazon will probably have a monopoly on developer
mindshare in the future.
Voloskaya - 3 hours ago
It's not like it's a pattern. Today is AWS re:invent. The same
is true of Google and MS during their respective annual dev
conferences.
antoncohen - 1 hours ago
I wonder how they handle isolation. Linux container technologies
don't normally provide sufficient isolation for multi-tenant
environments, which is why most of the cloud container
orchestrators require you to pre-provision VMs (ECS, GKE).

Azure Container Instances uses Windows Hyper-V Isolation that
boots a highly optimized VM per container, so containers have VM
isolation.

Has AWS built a highly optimized VM for running containers?
NathanKP - 1 hours ago
AWS employee here. Isolation is handled at the cluster level.
Apps that are run in the same cluster may be run on the same
underlying infrastructure, but clusters are separated.
nodesocket - 1 hours ago
If I understand this correctly, Fargate is similar to Elastic
Container Service, without having to worry about EC2 instances?
But you can also manage the EC2 instances with Fargate as well?
Seems like AWS has lots of products that overlap and it is
confusing to end users.

I'd say this is exactly why Google Cloud is superior (in my
opinion). AWS lacks user experience and a KISS philosophy. Just
feels like AWS keeps on bolting things on.
nathankunicki - 1 hours ago
No, Fargate is just a container target, like EC2 is.

You manage
the containers with ECS (Or the newly announced Kubernetes
equivalent). They are placed on either EC2 instances or somewhere
inside Fargate.
t1o5 - 4 hours ago
Is it the "AWS Day" or something ? I see 5 AWS related news in the
top !
drdrey - 4 hours ago
AWS re:Invent is happening in Vegas right now
whoisjuan - 4 hours ago
AWS re:Invent is happening this week. A lot of announcements and
product launches.
[deleted]
lclarkmichalek - 5 hours ago
Well that's one of the more nonsensical names to come out of AWS
recently.
parshimers - 3 hours ago
I'm pretty sure someone at Amazon is an Aqua Teen Hunger Force
fan: https://www.youtube.com/watch?v=uOd7HQoKxcU

Oglethorpe: We have successfully traveled eons through both space
and time through the Fargate. To get free cable.
Emory: I think it's a s-star gate
Oglethorpe: It's the Fargate! F, it's different from that movie
which I have never seen, so how would I copy it?
[deleted]
lotyrin - 6 hours ago
Is there any plan for Fargate + EKS to be able to support attached
EBS volumes? Please say yes.
[deleted]
bbgm - 6 hours ago
We are super interested in enabling EBS support for Fargate. We
do not have any timelines, but would love to know what your
expectations are and what you would use EBS for.

(I run the containers org at AWS)
lotyrin - 5 hours ago
Goal is to have a developer write up a service definition with
e.g. a web tier, service tier and database tier, wherein some
of those pods might need to have persistent data volumes and
expect EKS to be able to run that application for them without my
intervention, even if I were to have something shooting the
underlying compute nodes in the head (but ideally, I won't even
sweat those nodes' existence thanks to Fargate).

We'd be using services like RDS for everything we could, of
course, but
sometimes someone insists on persisting something to disk, and
sometimes that strategy makes sense.
nathankunicki - 1 hours ago
I currently run Cassandra inside a container. The data is on an
EBS volume, attached to the instance through CloudFormation at
stack creation time, and mounted through a systemd unit defined
in UserData (Also through CloudFormation). It is then exposed
to the container via a Docker volume mapping specified in the
task definition (Also through CloudFormation!).

Would love to
have an extension of the run-task command that specifies an EBS
volume to attach and where to mount it when using Fargate.
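[Editor's note: for readers unfamiliar with the pattern described above,
a hedged boto3 sketch of the task-definition half of it: a host path
(where the EBS volume is already mounted on the instance) exposed to the
container as a Docker volume. The names, image, and paths are made up,
and this is the ECS-on-EC2 pattern, not the EBS attach/mount steps and
not something Fargate supported at launch.]

import boto3

ecs = boto3.client("ecs")

# Hypothetical task definition mapping a host path (backed by an EBS
# volume mounted on the instance) into the container.
ecs.register_task_definition(
    family="cassandra",
    volumes=[{"name": "cassandra-data",
              "host": {"sourcePath": "/mnt/cassandra"}}],
    containerDefinitions=[{
        "name": "cassandra",
        "image": "cassandra:3.11",
        "memory": 4096,
        "mountPoints": [{"sourceVolume": "cassandra-data",
                         "containerPath": "/var/lib/cassandra"}],
    }],
)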
elcritch - 3 hours ago
One good example use case would be running distributed NoSQL /
KV stores. Running say a Riak KV cluster or an image caching
and processing service. Both of these would probably be best
using SSD EBS's to be able to bring the data storage and
computing closer together rather than using RDS or similar,
which in certain use cases it can be significantly faster due
to less network calls and latency. As an example setup, Rancher
has a plugin to allow using docker data volumes from EBS
volumes. It handles the naming, attaching the drives to the EC2
instance, etc.
nhumrich - 5 hours ago
Yes.

Edit: (I don't actually know, I'm just saying yes because you
asked)
bbgm - 5 hours ago
:) and if you ever have questions or feedback we are a tweet
away.
shroom - 3 hours ago
Is there any work in progress to simplify the AWS console? With
so much (more) cool stuff being announced, AWS feels a bit
overwhelming for some, me included. I'm referring to the UI part
mostly and some concepts like policies and user management.
Forgive me if this is the wrong place to ask... then I'll try
Twitter ;-)
dewyatt - 5 hours ago
I think I have AWS fatigue. I have a few certifications and a few
years of experience working with AWS, but it's getting difficult to
even keep track of all the services.
Florin_Andrei - 3 hours ago
This is a trend that will only accelerate in the future. We're
finally reaching the point where the rate of change is impacting
people's careers. The half-life of useful knowledge keeps
shrinking, and there is no end in sight.
remus - 5 hours ago
Maybe it's just been the last few days, but it feels like every
time I look at HN there are 2 new posts announcing new AWS
services!
lostcolony - 5 hours ago
Re:invent is going on. That's why. They hold big announcements
(like new services) until this week each year.
romanhn - 5 hours ago
It's just the last few days. The big AWS re:Invent conference
is happening this week, with all of the new service
announcements.
notyourday - 4 hours ago
It is really AWS Extract Money From Customer's CFOs service
fenwick67 - 2 hours ago
You forgot "Elastic".
notyourday - 2 hours ago
Of course. The more money the CFO has, the more is extracted!
Pure genius! AWS is a drug dealer.
[deleted]
dpweb - 5 hours ago
I've been using hyper.sh and I really like it. Especially since I
don't want a web interface: I can pull a container and start up a
container from my command line in 3 seconds. I can pull from the
docker repo and attach IPs and storage, all in the terminal. How
does this compare? I want to stay out of a web mgmt interface.
NathanKP - 5 hours ago
AWS has an API, and a command line application for integrating
with the API. You can (and probably should) use AWS without ever
touching the web management interface.

For an easy getting-started command line experience for ECS I
highly recommend this tool:
https://github.com/coldbrewcloud/coldbrew-cli
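[Editor's note: to give a flavour of the API route mentioned above, a
minimal boto3 sketch of launching a task on Fargate. The cluster name,
task definition, subnet and security-group IDs are placeholders, and the
task definition is assumed to already be registered with the awsvpc
network mode and Fargate compatibility.]

import boto3

ecs = boto3.client("ecs")

# Placeholders: substitute your own cluster, task definition, subnet, SG.
ecs.create_cluster(clusterName="demo")
ecs.run_task(
    cluster="demo",
    taskDefinition="my-app:1",       # an already-registered task definition
    launchType="FARGATE",
    count=1,
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "ENABLED",
    }},
)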
shroom - 3 hours ago
Wow, "hundreds of millions of new containers started each week" -
these are pretty insane numbers. Insane in a very cool and
mindnumbing way, that is!
ju-st - 5 hours ago
michaelbuckbee, please update
https://www.expeditedssl.com/aws-in-plain-english
raphaelj - 4 hours ago
How does this compare to Heroku?
dumbfounder - 4 hours ago
Did anyone else notice that 11/15 top stories on HN right now are
Amazon announcements? Crazy. Sorry for the offtopicish post...
Stefan-H - 4 hours ago
Reinvent causes this to happen every year. And to be frank, AWS
announcements generally have major impact on the internet as a
whole, especially on how people do business on it. So, it's
deserved, I would say.

Full disclosure: former AWS employee
dragonwriter - 4 hours ago
It's the AWS developer conference right now. You see the same
effect for other big tech firms during their respective
equivalents.
rifung - 4 hours ago
Isn't that because re:invent is happening now?
sigmonsays - 4 hours ago
They need to chill on posting. They posted 15 posts to the front
page and that's 50% of the headlines... all Amazon. Hope everyone
loves Amazon!
spydum - 4 hours ago
How is that different than when Apple has its launch days? The
announcements are mostly relevant to the major audiences here, so
it seems reasonable to think they'd get a surge of upvotes.
manigandham - 4 hours ago
Yes, their big annual conference: https://reinvent.awsevents.com/
dumbfounder - 3 hours ago
That would explain it!
kkotak - 5 hours ago
12 stories on the front page of HN leading to amazon.com and their
offerings? Hm....
binaryblitz - 4 hours ago
It's the first day of re:invent and they have a 40%+ market share
of IaaS. Not surprising that it's all over HN.

Note: I do not work for Amazon. :)
euyyn - 4 hours ago
Announcing everything on the same day pays off :)
[deleted]
bdburns - 5 hours ago
(Azure Container Instance engineer here)

This looks very similar to what we launched with Azure Container
Instances last summer. The Azure Container Instances kubernetes
connector is open source and available here:
https://github.com/Azure/aci-connector-k8s
christop - 4 hours ago
Yeah, looks very similar. I will be interested to see how quickly
containers can be provisioned on Fargate.

Maybe I was doing something wrong, but my experience so far with
ACI is that it consistently took about three to four minutes
until a smallish container was ready for use.
curiousDog - 5 hours ago
Wow this looks like an exact copy of Azure Container Instances.
azinman2 - 4 hours ago
Or just the general idea of a cloud provider making it easy to
run containers. That's not exactly a left-field idea.
weberc2 - 1 hours ago
Exactly this. I think this is pretty much the use case most
people envision when they think about a container
orchestration service (it was for me, anyway). My
understanding is that EC2 and friends didn't deliver this on
day 0 because efficient container isolation is hard.
pbecotte - 4 hours ago
Which is an exact copy of the Joyent service from the year
before :)
zenlikethat - 4 hours ago
You mean Triton? I wouldn't really call it an exact copy if
so. There was a whole Linux syscall translation layer in
there...
elcritch - 3 hours ago
Copy in the sense of the product features, not the product
implementation. Joyent has long provided a "run your
container as a service" which IMHO is the best way for a
small/medium company to run container services. The whole
create-VMs-to-run-containers approach creates a lot of extra
work. Plus this could be great for teams doing data analysis:
just spin up 100 containers for 30 seconds type of workloads.

The OP is short on details anyway; does Fargate run on tuned xen
vm's or do they have linux servers under there (or maybe they're
SmartOS ;) ).
johnnycarcin - 2 hours ago
Came here to post this. To me it shows the gap between Azure and
the non-enterprise world. Azure did this awhile back, as well as
the managed k8s thing, neither of which got much run on HN.

Perhaps Azure needs to work on marketing? Is there a legitimate
reason Azure isn't getting more traction in the non-enterprise
world? I mean that as a totally serious question, not in a
dickish way. Is it because it has the Microsoft name attached to
it or just because AWS has so much traction?

As always, full disclosure that I work at MSFT as well.
7ewis - 1 hours ago
We run AWS, GCP and Azure. Devs in my team can pretty much choose
their favourite cloud to deploy things to. Everyone always picks
AWS; it's just the easiest to navigate and feels like everything
links together well.

I think the only things we use Azure for are the Directory, and
Functions to run some PowerShell.

As AWS is the industry standard, I feel that a lot of people like
to stick with what they know too.
adventured - 1 hours ago
> Is it because it has the Microsoft name attached to it or just
> because AWS has so much traction?

Yes on both counts.

Also, the perception is common that Microsoft = Windows Server,
to a very high degree of bias. Thus, if you don't operate on that
platform, you'd immediately disregard Azure. A lot more work is
needed to convince non-Windows operations & developers to buy
into Microsoft's offerings around Linux. To emphasize, that says
nothing about the quality of existing offerings; rather, the
issue is one of perception. The perception is that Linux is and
will always be a secondary concern with Microsoft; and
potentially worse, there is skepticism over whether Microsoft
will invest into and support Linux over the very long term. If
one buys into that skepticism or doubts Microsoft's commitment to
Linux, AWS is immediately a superior choice as a long-term
platform bet.
beck5 - 6 hours ago
I am getting lost with all the ways to run containers on AWS. Is
this the equivalent of Google Compute Engine's beta option to
boot from a docker container?
NathanKP - 5 hours ago
AWS employee here on the ECS team. ECS on Fargate would be the
closest thing to what you are asking for. Upload a container
image, create a Fargate cluster, and launch a service on that
cluster that runs your container.
soccerdave - 5 hours ago
Is this available today? I thought that's what I heard, but
I'm not seeing anything in the AWS console.
NathanKP - 5 hours ago
It's currently available in the us-east-1 region, under the
ECS service in the console. Create a new cluster and Fargate
will be an option for launching and operating the cluster.
soccerdave - 5 hours ago
Thanks! I must have missed that bit about us-east-1 only
brazzledazzle - 3 hours ago
It's frustrating how hit or miss their service
availability is in each region. I can understand other
countries with different laws and regulations but they
can't even get some services multi-region in the US.
stuaxo - 5 hours ago
As a contractor, I come into places and use stuff for about 6
months then move on to the next place with a different setup.

The Amazon stuff is especially confusing; it seems they have
reinvented just about everything with their own jargon, it really
doesn't help.
bryanh - 6 hours ago
Everyone at Zapier was hoping for AWS managed Kubernetes.

Edit: Maybe we'll get it!
https://twitter.com/AWSreInvent/status/935909627224506368
nathankunicki - 6 hours ago
They announced that too, "AWS Elastic Container Service for
Kubernetes", or EKS as they call it.

This is different: this is where your containers run, not how you
manage the containers. You can use either ECS or EKS for
scheduling containers on Fargate, the same as scheduling them on
EC2 hardware.
100pctremote - 6 hours ago
Fargate is complementary to the just-announced Elastic Container
Service for Kubernetes (EKS)
kasperni - 6 hours ago
It is there as well https://aws.amazon.com/eks/
tootie - 6 hours ago
Guess I should learn about Kubernetes now.
lindydonna - 5 hours ago
I found the talks by Brendan Burns to be very good for a
high-level overview.
[deleted]
sigmonsays - 4 hours ago
Sigh, Amazon is taking over Hacker News. They need to chill on
posting. They posted 15 posts to the front page and that's 50% of
the headlines... all Amazon. Hope everyone loves Amazon!
chrismartin - 4 hours ago
Is "Fargate" an Aqua Teen Hunger Force reference?
https://youtu.be/uOd7HQoKxcU?t=38
therockspush - 4 hours ago
Probably. First thing I thought.
swivelmaster - 3 hours ago
Yeah, as soon as I saw "Fargate" I thought, "Is Amazon really
naming a product after a silly reference from an episode of
ATHF?"I'm not sure if I should be surprised or not.
zippergz - 1 minutes ago
I think they ran out of sensible names a long time ago.
gldalmaso - 5 hours ago
I love AWS and their pace of innovation, but some areas are
really lagging behind.

Two new container services announced, but São Paulo still doesn't
even have ECS, which was announced in 2014.
gtaylor - 3 hours ago
This is one of a few signals suggesting ECS may not figure
prominently in AWS's future strategy.
politician - 1 hours ago
That's an understatement! We've been watching ecs-agent
development stagnate for the past 6 months until just a couple of
weeks ago.

ECS has been on death's doorstep while AWS has been pushing the
Lambda strategy. My guess is that their numbers show a slowdown
in Lambda uptake due to the problems with Lambda, so they're now
moving over to this Fargate platform and ECS is getting a few
dribbles of dev time as a consequence.

I think they need to get over this NIH/Rebrand&Relabel syndrome
and implement Istio (https://istio.io/).
mck- - 4 hours ago
A third of the front page is Amazon; what's going on? Did they
release a dozen products in one go? Interesting release strategy
to bulk everything as opposed to spacing it out...
manigandham - 4 hours ago
Yes, their big annual conference: https://reinvent.awsevents.com/
brazzledazzle - 4 hours ago
re:Invent is happening right now.
nhumrich - 6 hours ago
This is a really cool midway point between Lambda and EC2. You
can have a large codebase, run continuously, but on "serverless".
minhajuddin - 5 hours ago
This is going to be really great for batch jobs which need isolated
environments. I have been waiting for something like this for a
long time. Amazon is really doing work. I'll definitely be using
this.
rm999 - 4 hours ago
Have you tried AWS Batch? My team moved a couple of our batch
machine learning modeling jobs to it earlier this year and it's
worked out great.

https://aws.amazon.com/batch/
minhajuddin - 4 hours ago
I haven't really tried Batch. But from an initial reading of the
documentation it didn't look like it supported running docker
images. My use case requires running docker images of static site
generators and that sort of thing. Will take another look at it.
dastbe - 2 hours ago
See here for the details on how to define a job, specifically
around running docker images on ECS:
http://docs.aws.amazon.com/batch/latest/userguide/create-job...
bbgm - 4 hours ago
How would you use Batch + Fargate? Let's assume Fargate is a
supported compute environment in Batch.

(I run the containers org at AWS. I happen to run Batch as well)
kirillseva - 3 hours ago
My org is looking to move machine learning to Batch as the
underlying infra.

All I want is to be able to do this:
1) specify a DAG of tasks. Each task is a docker image, CMD
string, CPU and memory limits
2) hit an API to run it for me. Each task runs on a new spot
instance
3) be able to query this service about the state of the DAG and
of each individual node

Sounds like if AWS provides an API to create a batch cluster (or
whatever you call it) and lets the tasks be defined in terms of
what docker image to run with what command, you'll satisfy this
desire.
bbgm - 2 hours ago
That is in line with our vision for Batch: to be the engine for
systems where you essentially describe a DAG and we run and
hyperoptimize the execution for you. We do some of what you're
asking for, but that's great feedback around what you'd like to
do.
cdnsteve - 29 minutes ago
Would this be a direct competitor to Google Cloud's App Engine
flexible environment? I.e., I just upload my docker container?
boyd - 6 hours ago
Notably, this appears to confirm a Kubernetes offering (EKS)!

"I will tell you that we plan to support launching containers on
Fargate using Amazon EKS in 2018"

[Edit] Looks like that just got announced too:
https://aws.amazon.com/eks/
NathanKP - 5 hours ago
AWS employee here. You are correct. Fargate is an underlying
technology for running containers without needing to manage
instances, and it will integrate with both the ECS and EKS
container orchestration and scheduling offerings.
[deleted]
azinman2 - 3 hours ago
Do all the containers I launch run in an EC2 VM that's isolated
for my account? Or does Fargate somehow provide the security
isolation without being a VM?
NathanKP - 3 hours ago
Fargate isolation is at the cluster level. Apps running in
the same cluster may share the underlying infrastructure,
apps running in different clusters won't.
crb - 2 hours ago
Is that infrastructure, EC2 instances?
allengeorge - 5 hours ago
I'm not 100% sure about the relationship between EKS, ECS and
Fargate.

Why would I deploy to Fargate over EKS? I assume it's because
with Fargate I don't have to write a k8s deployment spec?

Why would I deploy to Fargate over ECS?

Legitimately curious, and looking for clarification/correction.
NathanKP - 5 hours ago
AWS employee here. You would deploy to Fargate because you don't
want to have to manage the underlying EC2 instances. You can use
Fargate with both ECS and EKS (in 2018).

ECS and EKS are just two different schedulers for orchestrating
the containerized services that you want to run. Fargate is the
engine behind them that executes the containers for you without
you needing to worry about servers.

ECS as a scheduler will always integrate much better with other
AWS services. EKS will give you the advantage of being able to
run the same scheduler on premises or on another cloud.
mgalgs - 5 hours ago
I thought EKS was managed? Do you still have to manage the
underlying instances in EKS?
chkal - 3 hours ago
Thanks a lot for the explanation.
nathankunicki - 3 hours ago
Fargate is more analogous to EC2 than ECS or EKS.

Fargate is a placement target for containers, just like EC2
instances in a cluster would be. You use ECS and EKS to define
and schedule tasks/containers on a placement target.

The primary difference between Fargate and EC2 is that with
Fargate you don't need to manage physical instances and the
software stack running on them (Docker daemon, etc). When you
start a task, it runs...somewhere. In the AWS "cloud".
adrianmacneil - 5 hours ago
With ECS and EKS you get a managed master, then you set up your
own autoscaling groups etc to deploy nodes (which you manage)
into the cluster.

With Fargate, you get access to AWS-managed
multi-tenant nodes. So, Fargate connects to either your ECS or
EKS cluster and avoids the need for you to worry about managing
the nodes as well.
bg0 - 3 hours ago
Was hoping this was close to Google's App Engine. Patiently waiting.
CSDude - 5 hours ago
Fargate is a very logical step. I agree Kubernetes is really nice
but very complex for simplistic setups. Looking forward to using
it; too bad it's only in N. Virginia.
NathanKP - 5 hours ago
We will be steadily rolling Fargate out across other regions
starting in 2018.
bmurphy1976 - 3 hours ago
I haven't had a chance to dig through the documentation yet. Can
we deploy a pod instead of just a container? One of the things we
are struggling with is all the side services that have to go with
a container deployment (i.e. a secure or oauth proxy).
noahm - 1 hours ago
Fargate uses the same task definition abstraction as Amazon ECS.
See
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/l...
So yes, you can launch multiple containers in a single logical
unit.
samprotas - 1 hours ago
For low-utilization, low-cost continuous applications (think a
web socket listener with not much to do) this lowers the
entry-level cost below a t2.nano, it looks like. That's a win in
my book.