|
NihilCredo posted: I want to create a small webapp for friends & family to use during meetups. Most of the time it will just serve some static content, but once in a while it will need to fire up an external process (a Docker container) that could really benefit from some compute oomph.

Seconding what minato has mentioned, but for AWS: hosting from an S3 bucket and ECS/Lambda for the on-demand compute things. Things are much cheaper when you're not paying for a dedicated compute instance.

Edit: new page so quoting minato

minato posted: If it's mostly static content then just throw it in a storage bucket to avoid the web server running at all, and use Cloud Run to initiate the occasional compute.
|
# ? Nov 2, 2020 20:21 |
|
|
Pile Of Garbage posted: Seconding what minato has mentioned but for AWS with hosting from an S3 bucket and ECS/Lambda for on-demand compute things. Things are much cheaper when you're not paying for a dedicated compute instance.

Any particular reason you recommend AWS S3+ECS over GCP Storage+Run? For now, GCP has the advantage of being free forever instead of 1 year, and having a budget alert feature (last time I checked AWS didn't offer one).
|
# ? Nov 2, 2020 20:48 |
|
At this pricing level you're really looking at whatever API set you prefer working with, or that you want to learn, because the difference between $free forever and $0.000163/mo after 1 year is barely worth thinking about at all. AWS does have pricing alerts for you to set up; it's a CloudWatch feature, though, which is why you might have missed it.
|
# ? Nov 2, 2020 22:36 |
|
As someone who uses All The Clouds, the big 3 mostly have feature parity and come close in costs. However if you're StackOverflowing you'll likely find more AWS solutions than Azure/GCP, but GCP also has pretty good docs so you might not need StackOverflow/tutorials anyway. AWS does have a budget alert feature under Services --> Billing --> Budgets --> Create Budget. There's also some CloudWatch stuff you can enable to alert you if you go over some bandwidth threshold.
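For the record, a budget created through that console path is just a small JSON object if you'd rather define it from the CLI or API instead; a minimal sketch of the shape (name and amount are placeholders, and you attach notification thresholds separately):

```json
{
  "BudgetName": "monthly-cap",
  "BudgetLimit": { "Amount": "10.0", "Unit": "USD" },
  "TimeUnit": "MONTHLY",
  "BudgetType": "COST"
}
```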
|
# ? Nov 2, 2020 22:38 |
|
Zorak of Michigan posted: Why take work home with you?

to be fair almost half those people are only good follows if you want hot takes on tech. very few people are out there giving good advice on twitter because it's hard to build a following.
|
# ? Nov 3, 2020 02:15 |
|
NihilCredo posted: Any particular reason you recommend AWS S3+ECS over GCP Storage+Run? For now, GCP has the advantage of being free forever instead of 1 year, and having a budget alert feature (last time I checked AWS didn't offer one).

I did a tiny cheapo GCP thing a while back, and one thing to note is that once your $ credit runs out, bandwidth is not free forever. My little thing gets 6-12 cent bills every month. So figure anything cloud will cost a little bit, regardless of free tier.
|
# ? Nov 3, 2020 03:35 |
|
I see, thanks. It's fine to pay a few cents a month for the occasional request, I just didn't want to get silently moved to a basic $8/month tier for some resource or whatever.
|
# ? Nov 3, 2020 07:23 |
|
NihilCredo posted:We're using Gitlab and are quite happy with it as well. Gitea and Drone CI worked pretty good at my last job. Simple to install, simple to maintain.
|
# ? Nov 3, 2020 08:05 |
|
NihilCredo posted:For now, GCP has the advantage of being free forever instead of 1 year For google values of forever
|
# ? Nov 4, 2020 12:21 |
|
Anyone else's management been freaking out about the new Docker Hub rate limit poo poo? We're trying to figure out if we can basically buy one "Pro" Docker license for our artifact cache to authenticate with and make the problem go away. This seems like the kind of thing a company's TOS usually forbids (buying 1 seat and having 1000 users enjoy the benefits) but I can't see anyplace Docker calls it out as a problem. I've tried contacting their salespeople but for the first time ever, I cannot loving get anyone in sales to talk to me.

Curious if you all are dealing with this. I don't give a poo poo about any of the other features of the service, just the unlimited image pulls.

Docjowles fucked around with this message at 15:12 on Nov 6, 2020 |
# ? Nov 6, 2020 15:08 |
|
NihilCredo posted: Any particular reason you recommend AWS S3+ECS over GCP Storage+Run? For now, GCP has the advantage of being free forever instead of 1 year, and having a budget alert feature (last time I checked AWS didn't offer one).

Late reply, but nothing is free forever. You mentioned that it's a "hobby fun project to learn a different webdev stack", so IMO if you don't already know it AWS is the logical choice because that's what everyone uses. If you're already familiar with AWS and/or don't care about getting familiar with it for work (no criticism, totally understand) then yeah, go GCP. However if you literally just want the cheapest thing and don't care about any other externalities then yeah, GCP lookin hot af right now.

Docjowles posted: Anyone else's management been freaking out about the new Docker Hub rate limit poo poo? We're trying to figure out if we can basically buy one "Pro" Docker license for our artifact cache to authenticate with and make the problem go away. This seems like the kind of thing a company's TOS usually forbids (buying 1 seat and having 1000 users enjoy the benefits) but I can't see anyplace Docker calls it out as a problem. I've tried contacting their salespeople but for the first time ever, I cannot loving get anyone in sales to talk to me. Curious if you all are dealing with this.

If you're running GitLab Omnibus, or maybe even hosted, it can serve as a registry. Otherwise AWS ECR or some equivalent thing. Either way this is a fun example of management being complacent with a "free" solution and then getting thoroughly owned (assuming you have some paper-trail telling them they shouldn't be relying on the free service).

Edit: vvv yeah sorry my bad, I should have read your post instead of posting dumb poo poo! vvv

Pile Of Garbage fucked around with this message at 16:21 on Nov 6, 2020 |
# ? Nov 6, 2020 15:19 |
|
That doesn't answer my question. We pay for Artifactory and use it as an image cache, but Artifactory itself is hitting the rate limit at times proxying requests to Docker Hub. I'm just trying to figure out if I can buy a single $5/mo Docker subscription and plug those credentials into Artifactory, or if I need to buy one for every engineer we employ. Because usually enterprise licensing is as money grubbing as possible.
|
# ? Nov 6, 2020 16:03 |
|
Docjowles posted:We pay for Artifactory and use it as an image cache, but Artifactory itself is hitting the rate limit at times proxying requests to Docker Hub. If it's a cache, why is it hitting image pull rate limits? It should be requesting each image at most once, and after that just check if a new image for that tag has been published (which isn't affected by the rate limit), and serve from cache otherwise. Surely you aren't downloading more than 200 never-downloaded-before images from dockerhub every six hours?
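For what it's worth, Docker Hub reports the pull budget back in ratelimit-limit / ratelimit-remaining response headers on manifest requests, so you can check whether it's really the cache's identity burning pulls. The documented value format is "count;w=window-seconds"; a quick parser sketch (the header values below are made-up examples, not a live response):

```python
def parse_ratelimit(value: str) -> tuple[int, int]:
    """Split Docker Hub's "100;w=21600" header format into (count, window_seconds)."""
    count, _, window = value.partition(";w=")
    return int(count), int(window)

# Example values in the documented format:
headers = {
    "ratelimit-limit": "100;w=21600",
    "ratelimit-remaining": "87;w=21600",
}
limit, window = parse_ratelimit(headers["ratelimit-limit"])
remaining, _ = parse_ratelimit(headers["ratelimit-remaining"])
print(f"{remaining}/{limit} pulls left in a {window // 3600}h window")
```

If the remaining count drains even when Artifactory should be serving from cache, that's the smoking gun that HEAD/manifest traffic is being proxied through anonymously.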
|
# ? Nov 6, 2020 18:35 |
|
I thought all the cool kids were moving to GitHub's registry now anyway
|
# ? Nov 7, 2020 02:37 |
|
My stuff is in ECS and AWS so I can just push the images to ECR and it will only need to pull when there are updates. They announced they will have free public images in ECR, I suspect as a response and also to get people using ECR.
|
# ? Nov 7, 2020 03:44 |
|
Imagine not vendoring third party dependencies in TYOOL2020
|
# ? Nov 7, 2020 04:34 |
|
the only acceptable package management system is git clone
|
# ? Nov 7, 2020 06:22 |
|
Methanar posted: the only acceptable package management system is git clone

Ok Rob Pike
|
# ? Nov 7, 2020 13:42 |
|
I'm in a mixed environment, building Win/Mac software with some services deployed on Linux. What is the best pets -> cattle option available to me currently? We are using vSphere already, so spinning up VMs on demand for whichever version is being built (old versions = old Xcode) may be the best of what's out there right now. Orka runs macOS in Docker on macOS so that K8s can orchestrate/manage it. Is it feasible to roll your own version of this? (MacStadium doesn't offer it standalone; it's a service offering for their cloud build product.) Is there anything in the Ansible/Salt/OpenStack/whatever world that is better suited?

This is also a test project for larger deployment(s), but those will be pure Linux, so maybe I will have to accept that the build servers and production services will not use the same solution.

Edit: some other interesting options on Mac are Nix, Xcode Server, and Mac Sandbox. But I don't see any orchestration/CI tools leveraging these yet.

mr_package fucked around with this message at 19:47 on Nov 9, 2020 |
# ? Nov 9, 2020 19:45 |
|
We have a bunch of mac minis in one of our colos. Other solutions were very brittle and/or lagged behind the latest versions of what our devs needed. Our mac solution consists of KVM over IP and a bunch of ansible scripts to reset/reformat machines on a periodic basis.
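The periodic reset doesn't need to be fancy. A sketch of the shape such a playbook might take (host group, user, and paths are hypothetical; the real scripts do a lot more, like re-enrolling the Jenkins agent):

```yaml
- hosts: build_macs
  serial: 1  # reset one builder at a time so the pool stays available
  tasks:
    - name: Wipe the CI workspace
      ansible.builtin.file:
        path: /Users/ci/workspace
        state: absent

    - name: Recreate it empty
      ansible.builtin.file:
        path: /Users/ci/workspace
        state: directory
        owner: ci
```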
|
# ? Nov 10, 2020 00:16 |
|
My solution was to make the Mac support group run some Mac minis for me, and the extent of my involvement is telling Jenkins to ssh to them.
|
# ? Nov 10, 2020 00:21 |
|
Hadn't seen Orka before, that looks neat. Previously we'd split between on-prem minis and MacStadium minis. Another place was large enough that they went to Apple and got them to agree to let us run Hackintosh VMs as long as we had the same amount of hardware. That was cool because we'd use a Jenkins slave on demand: job comes in, spin up the Hackintosh, run the job, kill the VM.
|
# ? Nov 10, 2020 01:45 |
|
I am too tired to think about how to do this. Gmail will only forward mail to one account. I've got a "shared" email account that I want to forward to my email and hers automatically, within 5 minutes. What's the best way to set up some serverless solution that reads a Gmail inbox and then forwards every new email (including attachments) to two other Gmail accounts? AWS or GCP is fine, and yeah, I already have domains registered on both
|
# ? Nov 10, 2020 04:16 |
|
Are you sure you couldn't do that with filters, or setting up a Google Group?
|
# ? Nov 10, 2020 04:30 |
|
Hadlock posted: I am too tired to think about how to do this

Why does it have to be a Gmail account? You can just do this with SES. One of my email addresses only exists in SES and all it does is forward to my gmail via a serverless job.
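The fan-out to two addresses is the only non-obvious part: SES hands your Lambda the raw message, and you re-send one wrapped copy per recipient. A sketch of just that piece using the stdlib email package (the addresses are placeholders, and the actual SES send via boto3 is left out):

```python
import email
from email.mime.message import MIMEMessage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

FORWARD_TO = ["you@example.com", "her@example.com"]  # placeholder addresses

def build_forwards(raw_message: bytes, sender: str):
    """Wrap the original mail (attachments included) and address one copy per recipient."""
    original = email.message_from_bytes(raw_message)
    copies = []
    for rcpt in FORWARD_TO:
        msg = MIMEMultipart()
        msg["From"] = sender  # SES requires a verified From address
        msg["To"] = rcpt
        msg["Subject"] = "Fwd: " + (original["Subject"] or "(no subject)")
        msg.attach(MIMEText("Forwarded message attached.\n"))
        msg.attach(MIMEMessage(original))  # message/rfc822, attachments intact
        copies.append(msg)
    return copies
```

Each returned message would then go out through SES's send-raw-email call; attaching the original as message/rfc822 keeps attachments without any re-encoding.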
|
# ? Nov 10, 2020 12:11 |
|
New Relic/Data Dog/Splunk... What would y'all goons be using greenfield for a full stack monitoring solution? Bonus if we can get some amount of SIEM functionality out of it
|
# ? Nov 14, 2020 03:34 |
|
The latest Elasticsearch stack release is pretty compelling, with the release of the agent greatly simplifying configuration compared to the past jumble of YAML and cursing (although Painless was named ironically, I swear, and is still necessary sometimes).

Disclosure: employed at Elastic, but I'd have been saying to take a peek even if I wasn't. The hosted offerings have gotten significantly better in the past few years, too.
|
# ? Nov 14, 2020 03:45 |
|
Gyshall posted: New Relic/Data Dog/Splunk... What would y'all goons be using greenfield for a full stack monitoring solution? Bonus if we can get some amount of SIEM functionality out of it

Honestly, my experience has been that it doesn't really matter which provider you use; it's all about the quality of the data you're putting into it. If your logs suck and you don't clearly label your metrics then you'll have a nightmare ever trying to do anything meaningful with them. This becomes even more true when you start adding APM and application traces to the mix.

New Relic's new interface really sucks and I hate interacting with it, especially during incident response. Data Dog seems pretty good for logs and infra metrics but I've got very limited experience with it. SumoLogic's got by far my favourite interface for working with logs, but I've had basically no exposure to their metrics stuff.
|
# ? Nov 14, 2020 03:57 |
|
I’d lean towards Elasticsearch, and if in AWS, their fork (while troublesome as an example of stealing) is a more compelling story to me than CloudWatch.

New Relic is really good at app performance; I’ve never been impressed with anything else they do. The infra monitoring tool is not super useful to me. Datadog is way better than New Relic at the infra side, and the logging side, but it’s expensive as all hell, and they charge outrageous sums for “custom” metrics.

But here’s the thing: I mostly key off of app health, which is covered by New Relic, and wide-scale system health is surfaced via the prom stack for me. It also allows me to gather whatever the hell metrics I want and really dive deep on issues. If I tried to send that to a managed provider I’d be looking at 100k a month as opposed to 10k. I also don’t care about individual node health, as K8s catches the most egregious things out of the box, and health checks on individual apps catch the rest.

My ideal setup is Prometheus forwarding to a central Cortex, which then has Grafana connected to it, but that’s not for the faint of heart. I’ve heard good things re VictoriaMetrics as well as a remote write target. I’m of course biased towards systems that are designed to be able to do those sorts of things.

If I was just monitoring whatever, my old standby is Sensu. I guess Datadog is ok? Not sure it has SIEM support out of the box though.
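For anyone unfamiliar, the Prometheus side of that setup is just a remote_write stanza pointing at Cortex's push endpoint. A minimal sketch (the URL and tenant name are placeholders):

```yaml
remote_write:
  - url: http://cortex.example.internal/api/v1/push
    headers:
      X-Scope-OrgID: my-tenant   # Cortex's multi-tenancy header
```

The "not for the faint of heart" part is everything behind that URL: Cortex itself is a pile of microservices with object storage, ring membership, and compaction to babysit.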
|
# ? Nov 14, 2020 04:17 |
|
something about the name "data dog" rubs me the wrong way and I can't get over it
|
# ? Nov 14, 2020 04:48 |
|
my homie dhall posted:something about the name "data dog" rubs me the wrong way and I can't get over it Is it because it's noun noun instead of adjective noun like god intended?
|
# ? Nov 14, 2020 06:30 |
|
freeasinbeer posted: My ideal setup is Prometheus forwarding to a central cortex, which then has grafana connected to it, but that’s not for the faint of heart.

This is what we do, except Thanos. I'm told we have the world's largest Thanos footprint. And it's several million dollars a year cheaper than the Datadog it replaced lmao.

freeasinbeer posted: If I was just monitoring whatever, my old standby is sensu. I guess datadog is ok? Not sure it has siem support out of the box though.

The cool thing about Sensu is having to start 10 Ruby VMs every minute on every host
|
# ? Nov 14, 2020 07:04 |
|
New Relic's APM is very nice; infra monitoring pretty crappy. Datadog's customer support blows big time. Also not that much of a fan of their tooling in general.

I really like the way Elastic is moving forward, especially now they're working on a unified agent so I don't have to deploy the metric, file, log, and audit beats separately. Currently using it as a log and metrics platform. Might incorporate APM in the near future as well.
|
# ? Nov 14, 2020 11:02 |
|
If I was starting out now I’d pick either InfluxDB or Prometheus for ops team usage and pair it with an Elasticsearch stack, rolled up to lower resolution, with downstream eventing and integrations for the ML folks who work with their preferred tooling. Elastic acknowledges its limitations for high-resolution, high-dimensional sets of metrics that are best handled by specialized tools, and there are efforts to make it more efficient and work out of the box like modern SaaS-based solutions.

It’s pretty amazing how fast I got monitoring working recently with an Elastic agent compared to configuring Splunk, New Relic, and an endpoint monitoring system at a previous job. You get all three of those now (and then some) in one agent that’s developed in the open. Custom metrics means writing a Metricbeat module, but I made it work well enough. I definitely have some gripes with Kibana still, with how log filtering and ad hoc querying work compared to what I got from Data Dog or Sumologic, and without tuning the backend it takes a while to get query responses for long time windows (30+ days at only 30s intervals). But what I see is that a lot of companies want to consolidate things down, and when push comes to shove no ML person or data scientist wants to work with these proprietary monitoring systems.

Alerting in Elasticsearch’s ecosystem kinda sucks for ops people IMO, but it’s gotten better recently at least. Still kind of rudimentary. But this is why I’m also for a heterogeneous stack still
|
# ? Nov 14, 2020 16:48 |
|
We use data dog for metrics and monitoring and it's fine I guess. Lot fewer things to configure and keep running versus most other comparable metrics platforms. If I had to solve logging I'd go with a managed elastic setup 'cause right now we dogfood our siem product's log management tool and it's not great for application logging.
|
# ? Nov 14, 2020 16:56 |
|
Anyone tried out Loki and have impressions? I've been wanting to, but my department is still content with old school text files and rsyslog, so it's hard to build a case to get them to consider something newer. But if I can set up something super awesome maybe I can drag them into this century. Assuming the tool is any good, that is.

I've been nothing but impressed with Prometheus; the number of metrics it can handle on cheap hardware is pretty amazing. To be fair, my install replaced a ganglia setup which had a long history of obliterating hard drives, so pretty much anything would seem great. But I haven't been able to break Prometheus yet.
|
# ? Nov 14, 2020 18:48 |
|
xzzy posted: Anyone tried out Loki and have impressions? I've been wanting to, but my department is still content with old school text files and rsyslog so it's hard to build a case to get them to consider something newer.

Loki is “ok”; it's basically Cortex though, and it's complicated as hell to run.
|
# ? Nov 14, 2020 20:02 |
|
There's worse options than Loki if your logs' access pattern is write-once-read-maybe, and much better ones if it isn't. The thing I like about it is that it's one of the lightest-weight log aggregators you can run on a desktop Kubernetes cluster or something
|
# ? Nov 15, 2020 02:16 |
|
Prometheus + Grafana is dead easy to get rolling, particularly if you have a k8s cluster to deploy it to, although really Prometheus will just boot with sane defaults off a single binary. Grafana isn't much harder to get going.

For logging, if you have the budget for it, just do hosted Splunk and move on with your life.

Loki is cool but fairly new. Too new for prod in my opinion, but where we use it in dev it's fantastic
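If anyone wants to kick Loki's tires against plain rsyslog-style text files, the promtail side is small. A minimal sketch (Loki URL and log paths are placeholders; a real config also wants server and positions sections):

```yaml
clients:
  - url: http://loki.example.internal:3100/loki/api/v1/push

scrape_configs:
  - job_name: varlogs
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log   # promtail tails every file matching this glob
```

Labels are the whole indexing model: Loki only indexes the label set, not the log text, which is why it stays cheap for write-once-read-maybe logs.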
|
# ? Nov 16, 2020 09:51 |
|
|
Vulture Culture posted: There's worse options than Loki if your logs' access pattern is write-once-read-maybe, and much better ones if it isn't.

What else do you consider good for write-once-read-maybe? Logging costs are a thing at my current place and I would love some trip reports.
|
# ? Nov 17, 2020 02:37 |