|
a hot gujju bhabhi posted:Thanks for the info, super helpful. I looked at the traffic using tcpdump as suggested and it definitely initiates using a different port each time, but always requests port 80 on the LB. Is this a problem? Sorry for the potentially stupid question, I'm far more Dev than Ops unfortunately. It should request port 80, because that's the port the LB is listening on to do its LB thing. You could change that port, but then it would always request whatever port you changed it to, because the load balancing service is listening on that port. The source port, on the other hand, should change for each connection; that's just how ephemeral ports work, so what you're seeing is normal.
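You can watch it happen with a capture like this (the interface name is a guess for your box; adjust to taste):

```
# Destination port stays 80; the ephemeral client-side port changes per connection
sudo tcpdump -nn -i eth0 'tcp[tcpflags] & tcp-syn != 0 and dst port 80'
```

Filtering on SYNs keeps the output to one line per new connection, so the changing source ports are easy to see.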
|
# ? Oct 17, 2019 22:00 |
|
After some further investigation, it seems like the load balancer itself is actually doing okay, but one of the servers just seems to have a much harder time processing requests. In other words, its CPU spikes regularly even though it's handling the same volume as the others. Unfortunately this was all set up long before my arrival, so these VMs are far from immutable; in fact they're significantly mutated, so it wouldn't surprise me if there's some configuration defect on that specific VM. Thanks for the help anyway guys, I definitely learned a lot.
|
# ? Oct 17, 2019 22:26 |
|
Throwing in my two cents on DocumentDB - it has performed far worse than MongoDB in each of our performance tests, to the point where we jump from 20% of our time in Mongo to >80% of our time in DocDB. I can’t in all honesty recommend it if latency/performance is critical.
|
# ? Oct 18, 2019 00:58 |
|
a hot gujju bhabhi posted:After some further investigation it seems like actually the load balancer itself is doing okay, but one of the servers just seems to have a much harder time processing requests. In other words its CPU spikes regularly even though it's handling the same volume as the others. Unfortunately this was all set up long before my arrival so these VMs are far from immutable, in fact they're significantly mutated, so it wouldn't surprise me if there's some configuration defect on that specific VM. Automate the whole server config and then burn them all to the ground! DevOps anarchy!
|
# ? Oct 18, 2019 14:30 |
|
You can use a tool like goss to identify a lot of system configurations, emit a policy, and then use the tests to validate any new container or EC2 instance you build with automation.
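The flow is roughly this (the service names here are just placeholders for whatever your VMs actually run):

```
goss add service sshd                 # record a fact about the current box into goss.yaml
goss autoadd nginx                    # let goss guess related resources (package, port, process)
goss validate --format documentation  # re-run the same file as tests on a freshly built image
```

Point `goss validate` at a new AMI or container in your pipeline and any config drift from the golden state fails the build.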
|
# ? Oct 19, 2019 02:31 |
|
necrobobsledder posted:You can use a tool like goss to identify a lot of system configurations, emit a policy, and then use the tests to validate any new container or EC2 instance you build with automation. This looks amazing, but sadly Linux only 😢
|
# ? Oct 19, 2019 06:21 |
|
How about https://devblogs.microsoft.com/scripting/reverse-desired-state-configuration-how-it-works/
|
# ? Oct 19, 2019 12:34 |
|
For learning purposes, I'm creating a Twitter bot that hourly tweets random lyrics/phrases and I need to decide whether I should implement it on Lambda or EC2 (free tier). The basic gist is: code:
What do you think? At most the bot will only run 24 times a day.
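Not your actual code obviously, but as a sketch of the scale involved, the whole bot is basically one function (names here are invented and the real Twitter API call is stubbed out):

```python
import random

def pick_phrase(phrases, recent):
    """Pick a random phrase, skipping anything tweeted recently
    so back-to-back hourly runs don't repeat themselves."""
    fresh = [p for p in phrases if p not in recent]
    return random.choice(fresh or phrases)  # fall back if everything is "recent"

def handler(event, context):
    # In Lambda this would load the phrase list from S3 and post
    # via the Twitter API; both are stubbed for the sketch.
    phrases = ["line one", "line two", "line three"]
    return pick_phrase(phrases, recent=["line one"])
```

Wire the handler to an hourly CloudWatch Events rule and the Lambda free tier won't even notice 24 invocations a day, which is a point in Lambda's favor over a 24/7 EC2 instance.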
|
# ? Nov 7, 2019 03:59 |
|
Schneider Heim posted:I prefer to go down the road not taken. Do it in Azure Functions
|
# ? Nov 7, 2019 04:28 |
|
Schneider Heim posted:For learning purposes, I'm creating a Twitter bot that hourly tweets random lyrics/phrases and I need to decide whether I should implement it on Lambda or EC2 (free tier). This is easily doable in Lambda; reading files from S3 is a very straightforward operation. If you haven't used FaaS before, that sounds like a great first project. Just be sure to do your file I/O in /tmp. For bonus points, figure out how to keep the function warm, and check /tmp first to see if your files are still there before downloading from S3.
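To make the /tmp trick concrete, here's a rough sketch of the caching logic (bucket and key names are made up; the `download` callable stands in for a boto3 client's `download_file`):

```python
import os

CACHE_DIR = "/tmp"  # the only writable filesystem inside a Lambda container

def cached_fetch(bucket, key, download):
    """Return a local path for `key`, hitting S3 only on a cache miss.

    `download` is any callable with boto3's download_file shape:
    download(bucket, key, dest). On a warm invocation /tmp survives
    from the previous run, so the file may already be there.
    """
    dest = os.path.join(CACHE_DIR, os.path.basename(key))
    if not os.path.exists(dest):  # cold start, or the file was evicted
        download(bucket, key, dest)
    return dest
```

In the handler you'd call `cached_fetch("my-bucket", "lyrics.txt", s3.download_file)` with a real boto3 client; warm invocations then skip the network round trip entirely.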
|
# ? Nov 7, 2019 17:42 |
|
I have a CloudWatch log group (made of multiple log streams) and a feed of CloudTrail events (basically a change log/access log for an S3 bucket) that I want to compose into a single log stream for ease of viewing, searching, and auditing. I was under the impression that I could accomplish this with CloudWatch itself, but on further investigation, I’m not sure if that’s actually the case. What’s my best option for composing a CloudWatch log group and a series of CloudTrail events into one single log stream?
|
# ? Nov 15, 2019 21:16 |
|
You know what's dumb as hell? The fact that AWS doesn't publish a list of changes that are coming. Sure, your TAM can tell you what changes are coming, but it's under NDA. Also not every org has a TAM and in some orgs the TAM is gated through bureaucracies that don't pass along information that may not be relevant to them but is for other people. That said, it's loving hilarious that one of my coworkers spent like 2 weeks rebuilding some of our infrastructure only for an announcement to come out today that would have eliminated those two weeks of work.
|
# ? Nov 15, 2019 21:56 |
|
AWS employees don’t hear about upcoming re:Invent launches until re:Invent either, so yeah, I feel your pain.
|
# ? Nov 15, 2019 22:50 |
|
In my experience from my previous org (I officially become a TAM in one day!!), if you're doing a thing that's pushing the edges of an AWS service and have a nice TAM, you can get included on alpha and beta programs, with the service teams as your point of contact. I've done one usability study of an upcoming product and my previous team are on an alpha product. I can't tell you anything more specific about either, but getting involved definitely requires several stars aligning. Cancelbot fucked around with this message at 13:54 on Nov 17, 2019 |
# ? Nov 17, 2019 13:52 |
|
We’re working on being better about this, at least in the groups I work with like containers and app mesh. IMO while re:invent and other conferences are a great way to broadcast major features/services to customers, the best thing we can do is get our plans and designs in front of as many potential users as possible, as early as possible.
|
# ? Nov 18, 2019 02:33 |
|
Loving all the new announcements coming out prior to re:Invent. This will be my first time going to re:invent. Any tips from people who have been there before?
|
# ? Nov 20, 2019 20:17 |
|
Be careful how you book sessions. If you are sloppy you can walk ten miles in a single day, like a customer did.
|
# ? Nov 20, 2019 21:13 |
|
Agrikk posted:Be careful how you book sessions. If you are sloppy you can walk ten miles in a single day, like a customer did. New service: AWS Exercise.
|
# ? Nov 21, 2019 03:17 |
|
Agrikk posted:Be careful how you book sessions. If you are sloppy you can walk ten miles in a single day, like a customer did. Last year was my first time and it was definitely a learning experience. My big takeaways were: 1) Try to book all your sessions for a day in the same venue. Or at least schedule things so you only need to change venues once in a day. 2) Give yourself at least an hour to move between venues. 3) The MGM Grand is the worst hotel option because of 1 and 2.
|
# ? Nov 21, 2019 06:31 |
|
I have an environment set up in AWS where I have a bastion instance in a public subnet and multiple other EC2 instances in private subnets. I have an EFS set up and mounted on all the machines in the private subnets. What is the best method for transferring files between my PC and the EFS? As it sits now, to get a file from the EFS to my local machine, I have to scp it out to the bastion instance and then scp it from the bastion back to my PC. I noticed AWS DataSync, but that seems to be for copying huge swaths of data to an EFS in one fell swoop rather than transferring individual log files from time to time like I'm trying to do. Is there a better way than secure copying the file twice to get it back to my machine?
|
# ? Nov 22, 2019 21:15 |
|
Either proxy an SSH session through the bastion and use scp, or I believe you can tunnel it through an SSM session.
|
# ? Nov 22, 2019 21:36 |
|
Possibly overkill, but you could configure an S3 VPC Endpoint, and push/retrieve the files from S3.
|
# ? Nov 22, 2019 21:42 |
|
IMO, you should never actually be logged into bastions. Create local SSH configs that use ProxyJump to bounce through the bastion. Then it’s completely transparent that you’re going through a bastion at all. scp away.
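A minimal ~/.ssh/config for that setup looks something like this (hostnames and addresses made up; ProxyJump needs OpenSSH 7.3 or newer):

```
Host bastion
    HostName 203.0.113.10
    User ec2-user

# Anything in the private subnets hops through the bastion automatically
Host 10.0.*
    ProxyJump bastion
    User ec2-user
```

After that, `scp 10.0.1.5:/var/log/app.log .` from your PC just works, and you never get an interactive shell on the bastion itself.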
|
# ? Nov 22, 2019 22:51 |
|
I like all of the above suggestions. Either configure SSH proxy or some kind of S3 + Lambda setup that copies files to the destination every time a new object shows up in the bucket.
|
# ? Nov 23, 2019 00:17 |
|
crazypenguin posted:IMO, you should never actually be logged into bastions. How didn't I know about this? Awesome! Thank you.
|
# ? Nov 23, 2019 05:42 |
|
Curious to hear some feedback from anyone who has tried LocalStack for developing AWS stuff without actual resource spin-up and teardown. It looks super promising to me, but I've never used it in practice, so I'm keen to hear some thoughts from you much more experienced gurus.
|
# ? Nov 29, 2019 10:48 |
|
It's good and definitely worth using but there are some weird gotchas the further out from the popular services you get.
|
# ? Nov 29, 2019 15:38 |
|
LocalStack has quirks around stuff like IAM that won't work terribly well if you're doing complex stuff like cross-account workflows but it's fine for unit tests in theory. Problem is, if you're doing that and calling it a unit test, you might as well use Moto. Moto's bugs are easier to understand and hack through and it also works as a network service, too.
|
# ? Nov 29, 2019 17:39 |
|
Fwiw localstack is a wrapper over Moto. To give the OP some perspective, our product relies heavily on S3, SQS, and SNS which work pretty well in localstack so we use it to spin up a solid approximation of our environment on each engineer's machine rather than having a dev AWS account for that kind of stuff.
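For reference, the per-engineer setup is roughly a docker-compose file like this (the image tag and ports may differ across LocalStack versions; older releases exposed one port per service rather than a single edge port):

```
version: "3"
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
    environment:
      - SERVICES=s3,sqs,sns
```

Engineers then point the SDK or CLI at it instead of real AWS, e.g. `aws --endpoint-url=http://localhost:4566 s3 ls`.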
|
# ? Nov 29, 2019 17:58 |
|
a hot gujju bhabhi posted:Curious to hear some feedback from anyone who has tried LocalStack for developing AWS stuff without actual resource spin up and pull down. It looks super promising to me, but I've never used it in practice, I'm keen to hear some thoughts from you much more experienced gurus? localstack is OK for when you just need to satisfy a dependency and can't be bothered to swap out sqs/sns/whatever for something locally runnable, but it's not a very accurate recreation of the services it replaces, so you can't rely on it if you're testing the thing itself.
|
# ? Nov 30, 2019 04:24 |
|
I just took and passed my AWS Solutions Architect exam and wanted some advice on where to go from here. I work in networking primarily; I don't use AWS at all in my day-to-day work. But I had a lot of fun studying for the exam, and did some rinky-dink stuff like building WordPress sites, playing around with EC2, building home backup solutions for myself with S3, etc. I don't expect to get some high-paying job using AWS based just on that cert, but I also can't afford to take a pay cut for an entry-level job that uses it. I'm also not a programmer who can take advantage of a lot of AWS services. Based on all of the above, are any additional certs worth studying for? Or am I better off just trying to build stuff for myself and my organization? What a long-winded post; I guess I just wanted to celebrate passing and to say how fun the exam was to study for.
|
# ? Dec 2, 2019 19:24 |
|
Grats on passing! At this point, getting actual experience with real, non-toy work projects will probably be the most valuable thing, if you can in your current role. The cert is just a starting point, but a very good one, since AWS is so sprawling and complex that it's very easy to design a system that totally sucks or costs 10x what it should. So having that foundation of "here is how not to immediately shoot your foot off in the cloud" is awesome. There's also the "DevOps" certification track, which as I understand it is more about day-two operational tasks: how to keep the thing the solutions architect handed you running, monitored, secured, etc. If that sounds interesting, that's another area you could explore. There are TONS of jobs out there looking for people with AWS expertise. If you can get some real projects under your belt, you should definitely be able to command a raise and/or a new job if you want it. I've even come across multiple networking-specific jobs, and I wasn't even directly looking for them. They usually want someone with a traditional networking/security background plus some cloud chops to build a hybrid on-prem/cloud solution. So if you can prove you know your way around both a router CLI and VPCs, Direct Connect, Transit Gateways, etc., there are opportunities for you out there.
|
# ? Dec 3, 2019 02:17 |
|
Docjowles posted:Words Thanks for the info and the kind words Doc. I'll try and apply the knowledge to projects at work that make sense, and if not just keep plugging away here and there on personal projects for fun.
|
# ? Dec 3, 2019 20:54 |
|
Got my DevOps Pro booked for later this week; how representative are the ACG practice exams?
|
# ? Dec 3, 2019 21:05 |
|
For AWS Transfer, is there any way to set the "Restricted" flag for a user account programmatically? Using the CLI I'm not seeing any value in the "describe-user" JSON that matches that setting, and I don't see it as an option in the "create-user" command. I have a client that wants to use the same bucket for a ton of clients, and locking their generated user accounts to their S3 key location without clicking through dozens of accounts would be nice.
|
# ? Dec 5, 2019 18:19 |
|
PierreTheMime posted:For AWS Transfer, is there any way to set the "Restricted" flag for a user account programmatically? Using the CLI I'm not seeing any value in the "describe-user" JSON that matches that setting, and I don't see it as an option in the "create-user" command. I have a client that wants to use the same bucket for a ton of clients, and locking their generated user accounts to their S3 key location without clicking through dozens of accounts would be nice. I talked to my company’s TAM recently asking if it was possible, and their response was basically “we’ll keep it in mind”. Prolly not happening anytime soon. I feel your pain, friend. We’ve been using scope-down policies in lieu of that option. The details escape me right now, but I can ask our SREs to explain tomorrow. There might even be docs on it somewhere, but I don’t know for sure.
|
# ? Dec 5, 2019 23:40 |
|
Where did my post go? EDIT: Oh there it is.
|
# ? Dec 6, 2019 00:36 |
|
Pollyanna posted:I talked to my company’s TAM recently asking if it was possible, and their response was basically “we’ll keep it in mind”. Prolly not happening anytime soon. I feel your pain friend I got a response back from support, and the answer is (as with a lot of things) that it works, but only if you enter it in a specific undocumented way. Now, with an example, I should be able to get everything going. They said they already put in the documentation update request, so there's progress at least. Support notes: "In this command, note we don't need to use the "--home-directory" parameter. This is because it's specified inside "--home-directory-mappings" under the target." Example: code:
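Based on that support note, the working call presumably looks something like this (the server ID, role ARN, and bucket are placeholders, not my actual values):

```
aws transfer create-user \
  --server-id s-0123456789abcdef0 \
  --user-name client-a \
  --role arn:aws:iam::111122223333:role/transfer-access \
  --home-directory-type LOGICAL \
  --home-directory-mappings '[{"Entry": "/", "Target": "/my-bucket/client-a"}]'
```

The LOGICAL mapping pins the user's visible root to their own prefix in the shared bucket, which is exactly the "Restricted" behavior from the console.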
PierreTheMime fucked around with this message at 20:57 on Dec 7, 2019 |
# ? Dec 7, 2019 20:48 |
|
Not sure what thread this should go to, but I want to get an Elastic IP and VPN it to a set of containers on my NAS. Is that just going to be a VPC, Elastic IP, and VPN endpoint? Or is there more to it than that? Rationale: I need to upgrade my EC2 to a higher machine class, or I could just use my home NAS, but I don't want people knowing my home IP, and I want a stable IP when my home one changes.
|
# ? Dec 9, 2019 00:17 |
|
I'm bouncing all over the place with services lately and have hit another issue. I'm trying to use a Cognito User Pool to control access to an API Gateway, but while setting up my Authorizer it fails testing (returns "Unauthorized request") using an access_token from my REST call to Cognito. I've looked over a few example videos and I seem to be doing the same things; mine just doesn't work. Anyone hit this issue before? I'm sure it's something really simple I'm missing.
|
# ? Dec 9, 2019 19:47 |