Pile Of Garbage
May 28, 2007



NihilCredo posted:

I want to create a small webapp for friends & family to use during meetups. Most of the time it will just serve some static content, but once in a while it will need to fire up an external process (a docker container) that could really benefit from some compute oomph.

This is more of a hobby fun project to learn a different webdev stack than something we actually need, so I'd like to run it for free or for literal peanuts.

It looks like I should be able to host it all on the Google Cloud free tier: Dockerized webapp on App Engine (28 instance hours/day), then on demand run the compute jobs using Cloud Run (180k vCPU-seconds/month), and store generated data in Cloud Storage (5GB). I've looked at AWS, Azure, DO, and Heroku and their (permanent) free tiers don't seem to compare. Is there any pitfall I should be aware of?

Seconding what minato mentioned, but for AWS: host from an S3 bucket and use ECS/Lambda for the on-demand compute. Things are much cheaper when you're not paying for a dedicated compute instance.

Edit: new page so quoting minato

minato posted:

If it's mostly static content then just throw it in a storage bucket to avoid the web server running at all, and use Cloud Run to initiate the occasional compute.

Only pitfall I can think of is to make sure you use a robots.txt to ensure your static site isn't crawled as it will consume bandwidth you don't need. And maybe set up a Budget Alert so you're aware if you start to approach the free tier threshold.
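
For the robots.txt suggestion, a minimal sketch of dropping it into the bucket with the google-cloud-storage Python client (the bucket name is a placeholder and this assumes Application Default Credentials are already set up; robots.txt only deters well-behaved crawlers, it's not a hard block):

```python
from google.cloud import storage

# Placeholder bucket name; assumes Application Default Credentials are configured.
client = storage.Client()
bucket = client.bucket("my-static-site")

# Tell well-behaved crawlers to skip the whole site so they don't burn free-tier bandwidth.
blob = bucket.blob("robots.txt")
blob.upload_from_string("User-agent: *\nDisallow: /\n", content_type="text/plain")
```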


NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Pile Of Garbage posted:

Seconding what minato mentioned, but for AWS: host from an S3 bucket and use ECS/Lambda for the on-demand compute. Things are much cheaper when you're not paying for a dedicated compute instance.

Any particular reason you recommend AWS S3+ECS over GCP Storage+Run? For now, GCP has the advantage of being free forever instead of 1 year, and having a budget alert feature (last time I checked AWS didn't offer one).

12 rats tied together
Sep 7, 2006

At this pricing level you're really just looking at whichever API set you prefer working with, or want to learn, because the difference between $free forever and $0.000163/mo after 1 year is barely worth thinking about at all.

AWS does have pricing alerts you can set up; they're a CloudWatch feature, though, which is why you might have missed them.
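
Roughly what that looks like if you script it with boto3 instead of clicking through the console (a sketch, not anyone's actual setup: the SNS topic ARN and threshold are placeholders, billing metrics only exist in us-east-1, and "Receive Billing Alerts" has to be enabled in the billing preferences first):

```python
import boto3

# Billing metrics only live in us-east-1, regardless of where your resources run.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-5-usd",            # hypothetical alarm name
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                                   # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=5.0,                                  # alert once the month-to-date bill passes $5
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder SNS topic
)
```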

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
As someone who uses All The Clouds, the big 3 mostly have feature parity and are close on cost. If you're StackOverflowing you'll likely find more AWS solutions than Azure/GCP ones, but GCP also has pretty good docs, so you might not need StackOverflow/tutorials anyway.

AWS does have a budget alert feature under Services --> Billing --> Budgets --> Create Budget. There's also some CloudWatch stuff you can enable to alert you if you go over some bandwidth threshold.
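
The console path above is probably easiest, but the Budgets route can also be scripted; a rough boto3 sketch with a made-up account ID, amount, and email:

```python
import boto3

budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="123456789012",                       # placeholder account ID
    Budget={
        "BudgetName": "hobby-project-budget",       # hypothetical budget name
        "BudgetLimit": {"Amount": "5.0", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Email when actual spend passes 80% of the monthly budget.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "you@example.com"}],
        }
    ],
)
```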

FamDav
Mar 29, 2008

Zorak of Michigan posted:

Why take work home with you?

to be fair, almost half those people are only good follows if you want hot takes on tech. very few people are out there giving good advice on twitter because it's hard to build a following.

crazypenguin
Mar 9, 2005
nothing witty here, move along

NihilCredo posted:

Any particular reason you recommend AWS S3+ECS over GCP Storage+Run? For now, GCP has the advantage of being free forever instead of 1 year, and having a budget alert feature (last time I checked AWS didn't offer one).

I did a tiny cheapo GCP thing a while back, and one thing to note is that once your $ credit runs out, bandwidth is not free forever.

My little thing gets 6-12 cent bills every month.

So, figure anything cloud will cost a little bit, regardless of free tier.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

I see, thanks. It's fine to pay a few cents a month for the occasional request, I just didn't want to get silently moved to a basic $8/month tier for some resource or whatever.

Mr Shiny Pants
Nov 12, 2012

NihilCredo posted:

We're using Gitlab and are quite happy with it as well.

Sometimes it's got minor bugs that have been left open for 2+ years in favor of adding more enterprise paid features, which I can't really blame them for. None of those have been show-stoppers, just stuff like the build cache not triggering and slowing down builds by a few minutes.

I might consider Gitea if I did not need a built-in CI/CD system or built-in package manager, and/or if I didn't have a beefy server to host it on. Gitlab is a massive resource hog, while Gitea runs on a Pi and feels blazing fast at all times.

Then again, Gitlab is an enterprise product with all that it entails, e.g. I've never had a single issue running a plain `gitlab-backup create && apt-get upgrade` after a new release; whereas Gitea is an open-source project that isn't even dogfooding itself yet (its code is hosted on GitHub).

edit: Gitea apparently supports git mirroring (while it's a paid feature in Gitlab) so you can maybe install both with mirrored repos and get a feel for which one you like better.

Gitea and Drone CI worked pretty good at my last job. Simple to install, simple to maintain.

Soricidus
Oct 21, 2010
freedom-hating statist shill

NihilCredo posted:

For now, GCP has the advantage of being free forever instead of 1 year

For google values of forever

Docjowles
Apr 9, 2009

Anyone else's management been freaking out about the new Docker Hub rate limit poo poo? We're trying to figure out if we can basically buy one "Pro" Docker license for our artifact cache to authenticate with and make the problem go away. This seems like the kind of thing a company's TOS usually forbids (buying 1 seat and having 1000 users enjoy the benefits) but I can't see anyplace Docker calls it out as a problem. I've tried contacting their salespeople but for the first time ever, I cannot loving get anyone in sales to talk to me :v: Curious if you all are dealing with this.

I don't give a poo poo about any of the other features of the service, just the unlimited image pulls.

Docjowles fucked around with this message at 15:12 on Nov 6, 2020

Pile Of Garbage
May 28, 2007



NihilCredo posted:

Any particular reason you recommend AWS S3+ECS over GCP Storage+Run? For now, GCP has the advantage of being free forever instead of 1 year, and having a budget alert feature (last time I checked AWS didn't offer one).

Late reply but nothing is free forever. You mentioned that it's a "hobby fun project to learn a different webdev stack", so IMO if you don't already know it, AWS is the logical choice because that's what everyone uses. If you're already familiar with AWS and/or don't care about getting familiar with it for work (no criticism, totally understand) then yeah, go GCP.

However if you literally just want the cheapest thing and don't care about any other externalities then yeah, GCP lookin hot af right now.

Docjowles posted:

Anyone else's management been freaking out about the new Docker Hub rate limit poo poo? We're trying to figure out if we can basically buy one "Pro" Docker license for our artifact cache to authenticate with and make the problem go away. This seems like the kind of thing a company's TOS usually forbids (buying 1 seat and having 1000 users enjoy the benefits) but I can't see anyplace Docker calls it out as a problem. I've tried contacting their salespeople but for the first time ever, I cannot loving get anyone in sales to talk to me :v: Curious if you all are dealing with this.

I don't give a poo poo about any of the other features of the service, just the unlimited image pulls.

If you're running GitLab Omnibus, or maybe even the hosted version, it can serve as a registry. Otherwise AWS ECR or some equivalent thing. Either way this is a fun example of management being complacent with a "free" solution and then getting thoroughly owned (assuming you have some paper trail telling them they shouldn't be relying on the free service).

Edit: vvv yeah sorry my bad, I should have read your post instead of posting dumb poo poo! vvv

Pile Of Garbage fucked around with this message at 16:21 on Nov 6, 2020

Docjowles
Apr 9, 2009

That doesn't answer my question. We pay for Artifactory and use it as an image cache, but Artifactory itself is hitting the rate limit at times proxying requests to Docker Hub. I'm just trying to figure out if I can buy a single $5/mo Docker subscription and plug those credentials into Artifactory, or if I need to buy one for every engineer we employ. Because usually enterprise licensing is as money grubbing as possible.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Docjowles posted:

We pay for Artifactory and use it as an image cache, but Artifactory itself is hitting the rate limit at times proxying requests to Docker Hub.

If it's a cache, why is it hitting image pull rate limits? It should be requesting each image at most once, and after that just checking whether a new image for that tag has been published (which isn't affected by the rate limit) and serving from cache otherwise.

Surely you aren't downloading more than 200 never-downloaded-before images from Docker Hub every six hours?
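
Side note for anyone trying to see where their cache actually stands: Docker documents a way to read your current quota off the special ratelimitpreview/test image. A rough sketch with requests (pass auth=(user, password) on the token request to see an authenticated account's limit instead of the anonymous one):

```python
import requests

# Anonymous token scoped to the rate-limit test image; add auth=(user, password)
# to this request to check the quota for a paid/authenticated account instead.
token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io",
            "scope": "repository:ratelimitpreview/test:pull"},
).json()["token"]

# HEAD requests don't count against the limit; the response headers report the quota.
resp = requests.head(
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest",
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.headers.get("ratelimit-limit"), resp.headers.get("ratelimit-remaining"))
```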

Hughlander
May 11, 2005

I thought all the cool kids were moving to GitHub's registry now anyway

JHVH-1
Jun 28, 2002
My stuff is in ECS on AWS, so I can just push the images to ECR and it only needs to pull when there's an update.

They announced they will have free public images in ECR, I suspect as a response to the Docker Hub limits and also to get people using ECR.
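
If anyone wants to go that route, mirroring a Hub image into a private ECR repo is only a few calls; a rough sketch with boto3 plus the docker CLI (the repo name and source image are placeholders, and it assumes working AWS credentials and docker installed locally):

```python
import base64
import subprocess
import boto3

REGION = "us-east-1"
SOURCE = "python:3.9-slim"          # placeholder: the Docker Hub image to mirror
REPO = "mirror/python"              # hypothetical ECR repository name

ecr = boto3.client("ecr", region_name=REGION)

# Create the repository if it doesn't exist yet.
try:
    ecr.create_repository(repositoryName=REPO)
except ecr.exceptions.RepositoryAlreadyExistsException:
    pass

# ECR auth tokens are base64-encoded "AWS:<password>" pairs.
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
registry = auth["proxyEndpoint"].replace("https://", "", 1)

subprocess.run(["docker", "login", "-u", user, "--password-stdin", registry],
               input=password, text=True, check=True)

# Pull once from Docker Hub, retag, and push so future pulls hit ECR instead of the Hub.
target = f"{registry}/{REPO}:{SOURCE.split(':')[1]}"
subprocess.run(["docker", "pull", SOURCE], check=True)
subprocess.run(["docker", "tag", SOURCE, target], check=True)
subprocess.run(["docker", "push", target], check=True)
```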

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Imagine not vendoring third party dependencies in TYOOL2020

Methanar
Sep 26, 2013

by the sex ghost
the only acceptable package management system is git clone

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Methanar posted:

the only acceptable package management system is git clone

Ok Rob Pike

mr_package
Jun 13, 2000
I'm in a mixed environment, building Win/Mac software with some services deployed on Linux. What is the best pets -> cattle option available to me currently?

We are using vSphere already, so spinning up VMs on demand for whichever version is being built (old versions = old Xcode) may be the best of what's out there right now. Orka uses macOS in Docker in macOS so that K8s can orchestrate/manage. Is it feasible to roll your own version of this? (MacStadium doesn't offer it standalone; it's a service offering for their cloud build product.) Is there anything in the Ansible/Salt/OpenStack/whatever world that is better suited?

This is also a test project for larger deployment(s), but those will be pure Linux, so maybe I will have to accept that the build servers and production services will not use the same solution.

Edit: some other interesting options on Mac are Nix, Xcode Server, and Mac Sandbox. But I don't see any orchestration/ci tools leveraging these yet.

mr_package fucked around with this message at 19:47 on Nov 9, 2020

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
We have a bunch of mac minis in one of our colos. Other solutions were very brittle and/or lagged behind the latest versions of what our devs needed.

Our mac solution consists of KVM over IP and a bunch of ansible scripts to reset/reformat machines on a periodic basis.

xzzy
Mar 5, 2009

My solution was to make the Mac support group run some Mac minis for me, and the extent of my involvement is telling Jenkins to ssh to them.

Hughlander
May 11, 2005

Hadn't seen Orka before, that looks neat. Previously we'd split between on-prem minis and MacStadium minis.

Another place was large enough that it went to Apple and got them to agree to let us run Hackintosh VMs as long as we had the same amount of hardware. That was cool because we'd use a Jenkins slave on demand: job comes in, spin up the Hackintosh, run the job, kill the VM.

Hadlock
Nov 9, 2004

I am too tired to think about how to do this

Gmail will only forward mail to one account. I've got a "shared" email account that I want to forward to my email and hers automatically, within 5 minutes.

What's the best way to set up some serverless solution for reading a Gmail inbox and then forwarding every new email (including attachments) to two other Gmail accounts?

AWS or GCP is fine, and yeah I already have domains registered to both

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Are you sure you couldn't do that with filters, or setting up a Google Group?

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Hadlock posted:

I am too tired to think about how to do this

Gmail will only forward mail to one account. I've got a "shared" email account that I want to forward to my email and hers automatically, within 5 minutes.

What's the best way to set up some serverless solution for reading a Gmail inbox and then forwarding every new email (including attachments) to two other Gmail accounts?

AWS or GCP is fine, and yeah I already have domains registered to both

Why does it have to be a Gmail account? You can just do this with SES. One of my email addresses only exists in SES, and all it does is forward to my Gmail via a serverless job.
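
Not claiming this is the exact setup, but the usual shape of that pattern is: an SES receipt rule on your domain writes the raw message to S3 and triggers a Lambda, which re-sends it to both Gmail addresses. Rough sketch (bucket, domain, and addresses are placeholders; the forwarding address has to be a verified SES identity, and the S3 action in the receipt rule has to run before the Lambda action):

```python
import boto3
import email

s3 = boto3.client("s3")
ses = boto3.client("ses")

BUCKET = "incoming-mail-bucket"                   # placeholder: where the SES receipt rule stores raw mail
FORWARD_TO = ["you@gmail.com", "her@gmail.com"]   # placeholder destination addresses
FORWARD_FROM = "shared@yourdomain.example"        # must be a verified SES identity

def handler(event, context):
    # The SES receipt rule invokes this Lambda after writing the raw MIME to S3,
    # keyed by message ID (add your key prefix here if the rule uses one).
    message_id = event["Records"][0]["ses"]["mail"]["messageId"]
    raw = s3.get_object(Bucket=BUCKET, Key=message_id)["Body"].read()

    msg = email.message_from_bytes(raw)
    # SES will only send from verified identities, so rewrite the sender headers
    # and keep the original sender reachable via Reply-To.
    original_from = msg.get("From", "")
    for header in ("From", "Return-Path", "Sender", "Reply-To", "DKIM-Signature"):
        del msg[header]
    msg["From"] = FORWARD_FROM
    msg["Reply-To"] = original_from

    ses.send_raw_email(
        Source=FORWARD_FROM,
        Destinations=FORWARD_TO,
        RawMessage={"Data": msg.as_bytes()},
    )
```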

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
New Relic/Datadog/Splunk... What would y'all goons be using greenfield for a full-stack monitoring solution? Bonus if we can get some amount of SIEM functionality out of it.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The latest Elastic Stack release is pretty compelling, with the new agent greatly simplifying configuration compared to the past jumble of YAML and cursing, although Painless was named ironically (I swear) and is still necessary sometimes.

Disclosure: employed at Elastic, but I'd have been saying to take a peek even if I wasn't. The hosted offerings have gotten significantly better in the past few years, too

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

Gyshall posted:

New Relic/Datadog/Splunk... What would y'all goons be using greenfield for a full-stack monitoring solution? Bonus if we can get some amount of SIEM functionality out of it.

Honestly, my experience has been that it doesn't really matter which provider you use; it's all about the quality of the data you're putting into it. If your logs suck and you don't clearly label your metrics, then you'll have a nightmare ever trying to do anything meaningful with them. This becomes even more true when you start adding APM and application traces to the mix.

New Relic's new interface really sucks and I hate interacting with it, especially during incident response. Data Dog seems pretty good for logs and infra metrics but I've got very limited experience with it. SumoLogic's got by far my favourite interface for working with logs but I've had basically no exposure to their metric stuff.
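
To make "clearly label" concrete, here's a trivial hand-rolled example of structured, labelled logs using only the Python stdlib (the service/env/field names are invented, and in practice most logging libraries or agents will do this part for you):

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line so any log backend can index the fields."""
    def format(self, record):
        payload = {
            "ts": time.time(),
            "level": record.levelname,
            "service": "checkout-api",        # invented label: which app emitted this
            "env": "prod",                    # invented label: which environment
            "msg": record.getMessage(),
        }
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Searchable key/value fields instead of facts buried in a prose string.
log.info("payment failed",
         extra={"fields": {"order_id": "o-123", "gateway": "stripe", "latency_ms": 840}})
```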

freeasinbeer
Mar 26, 2015

by Fluffdaddy
I'd lean towards Elasticsearch, and if you're in AWS, their fork (while troublesome as an example of stealing) is a more compelling story to me than CloudWatch.


NewRelic is really good at app performance, I’ve never been impressed with anything else they do. The infra monitoring tool is not super useful to me.


Datadog is way better than NewRelic on the infra side, and the logging side, but it's expensive as all hell, and they charge outrageous sums for "custom" metrics.

But here's the thing: I mostly key off of app health, which is covered by NewRelic, and wide-scale system health is surfaced via the Prom stack for me. It also allows me to gather whatever the hell metrics I want and really dive deep on issues. If I tried to send all that to a managed provider I'd be looking at 100k a month as opposed to 10k.

I also don’t care about individual node health, as K8s catches the most egregious things out of the box, and health checks on individual apps catch the rest.

My ideal setup is Prometheus forwarding to a central cortex, which then has grafana connected to it, but that’s not for the faint of heart.

I've heard good things re VictoriaMetrics as a remote-write target as well.

I’m of course biased with systems that are designed to be able to do those sorts of things.

If I was just monitoring whatever, my old standby is sensu. I guess datadog is ok? Not sure it has siem support out of the box though.

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
something about the name "data dog" rubs me the wrong way and I can't get over it

Hughlander
May 11, 2005

my homie dhall posted:

something about the name "data dog" rubs me the wrong way and I can't get over it

Is it because it's noun noun instead of adjective noun like god intended?

Methanar
Sep 26, 2013

by the sex ghost

freeasinbeer posted:

My ideal setup is Prometheus forwarding to a central cortex, which then has grafana connected to it, but that’s not for the faint of heart.

This is what we do, except with Thanos. I'm told we have the world's largest Thanos footprint :)

And it's several million dollars a year cheaper than the datadog it replaced lmao.


freeasinbeer posted:

If I was just monitoring whatever, my old standby is sensu. I guess datadog is ok? Not sure it has siem support out of the box though.

The cool thing about sensu is having to start 10 ruby VMs every minute on every host

LochNessMonster
Feb 3, 2005

I need about three fitty


New Relic's APM is very nice; the infra monitoring is pretty crappy.

Datadog's customer support blows big time. Also not that much of a fan of their tooling in general.

I really like the way Elastic is moving forward, especially now that they're working on a unified agent so I don't have to deploy the metric, file, log, and audit beats separately. Currently using it as a log and metrics platform. Might incorporate APM in the near future as well.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
If I was starting out now I'd pick either InfluxDB or Prometheus for ops-team usage and pair it with an Elasticsearch stack, rolled up at lower resolution, for downstream eventing and integrations with ML folks who work with their preferred tooling. Elastic acknowledges its limitations for high-resolution, high-dimensionality metrics that are best handled by specialized tools, and there are efforts to make it more efficient and to work out of the box like modern SaaS-based solutions. It's pretty amazing how fast I got monitoring working recently with an Elastic agent compared to configuring Splunk, New Relic, and an endpoint monitoring system at a previous job. You get all three of those now (and then some) in one agent that's developed in the open. Custom metrics means writing a Metricbeat module, but I made it work well enough.

I definitely still have some gripes with Kibana over how log filtering and ad-hoc querying work compared to what I got from Datadog or Sumologic, and without tuning the backend it takes a while to get query responses for long time windows (30+ days at only 30s intervals). But what I see is that a lot of companies want to consolidate things down, and when push comes to shove no ML person or data scientist wants to work with these proprietary monitoring systems.

Alerting in Elasticsearch's ecosystem kinda sucks for ops people IMO, but it's gotten better recently at least. Still kind of rudimentary. But this is why I'm still for a heterogeneous stack.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
We use Datadog for metrics and monitoring and it's fine I guess. A lot fewer things to configure and keep running versus most other comparable metrics platforms. If I had to solve logging I'd go with a managed Elastic setup 'cause right now we dogfood our SIEM product's log management tool and it's not great for application logging.

xzzy
Mar 5, 2009

Anyone tried out Loki and have impressions? I've been wanting to, but my department is still content with old school text files and rsyslog so it's hard to build a case to get them to consider something newer.. but if I can set up something super awesome maybe I can drag them into this century.

Assuming the tool is any good that is.

I've been nothing but impressed with Prometheus, the number of metrics it can handle on cheap hardware is pretty amazing. To be fair my install replaced a ganglia setup which had a long history of obliterating hard drives so pretty much anything would seem great. But I haven't been able to break Prometheus yet.

freeasinbeer
Mar 26, 2015

by Fluffdaddy

xzzy posted:

Anyone tried out Loki and have impressions? I've been wanting to, but my department is still content with old school text files and rsyslog so it's hard to build a case to get them to consider something newer.. but if I can set up something super awesome maybe I can drag them into this century.

Assuming the tool is any good that is.

I've been nothing but impressed with Prometheus, the number of metrics it can handle on cheap hardware is pretty amazing. To be fair my install replaced a ganglia setup which had a long history of obliterating hard drives so pretty much anything would seem great. But I haven't been able to break Prometheus yet.

Loki is "ok"; it's basically Cortex though, and it's complicated as hell to run.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
There's worse options than Loki if your logs' access pattern is write-once-read-maybe, and much better ones if it isn't. The thing I like about it is that it's one of the lightest-weight log aggregators you can run on a desktop Kubernetes cluster or something

Hadlock
Nov 9, 2004

Prometheus + Grafana is dead easy to get rolling, particularly if you have a k8s cluster to deploy it to, although really Prometheus will just boot with sane defaults off a single binary.

Grafana isn't much harder to get going.

For logging, if you have the budget for it, just do hosted Splunk and move on with your life.

Loki is cool but fairly new. Too new for prod in my opinion, but when we use it in dev it's fantastic.
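
And on the app side, giving Prometheus something to scrape is also only a few lines; a sketch with the prometheus_client library (the port and metric names are arbitrary examples, not anything from a real setup):

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Arbitrary example metrics with a label for the HTTP status.
REQUESTS = Counter("app_requests_total", "Requests handled", ["status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

if __name__ == "__main__":
    start_http_server(8000)   # exposes /metrics; point a Prometheus scrape job at :8000
    while True:
        with LATENCY.time():            # record how long the fake "request" took
            time.sleep(random.random() / 10)
        REQUESTS.labels(status="200").inc()
```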


freeasinbeer
Mar 26, 2015

by Fluffdaddy

Vulture Culture posted:

There's worse options than Loki if your logs' access pattern is write-once-read-maybe, and much better ones if it isn't. The thing I like about it is that it's one of the lightest-weight log aggregators you can run on a desktop Kubernetes cluster or something

What else do you consider good for write-once-read-maybe? Logging costs are a thing at my current place and I would love some trip reports.
