|
We have a devops team but we neglected to invite them to our meeting about idk, monitoring our services or w/e, we're probably doing this wrong. I'm kind of in a position where I could in theory introduce and apply good devopsy practices to our stuff, but I'm not really confident enough to do much but go with the flow of the existing, possibly suboptimal approaches...
|
# ? Aug 5, 2017 20:38 |
|
|
Vanadium posted:
We have a devops team ... we're probably doing this wrong.

Having a devops team is doing it wrong by definition.
|
# ? Aug 5, 2017 21:36 |
|
Yeah, that's why I brought it up.
|
# ? Aug 5, 2017 23:33 |
|
Pollyanna posted:
I've always seen SRE stuff described as "be on PagerDuty and get frantic calls when alarms go off" while "doing devops" is Docker, CI/CD, and AWS.

SRE work is a superset of software engineering that deals not only with CI/CD but also platform stability. It can definitely include PagerDuty work, but a lot of it is looking at existing architecture and performance, helping development teams identify bottlenecks and problem areas, and helping them re-engineer with an eye for operational improvement.
|
# ? Aug 5, 2017 23:43 |
|
There's a rainbow of titles that seem oriented primarily around the axis of "how much of the toil of classic sysadmin-style operations work do you have to deal with day-to-day?", but all of them should treat sshing into machines to fix random problems like it's cancer. Nobody ever has a title like "Agile Engineer" or "Program Engineer", and nobody has an "Agile Group", but boy are there tons of jobs with "devops" in the name. They seem to come universally from big, bureaucratic companies that software professionals don't respect. Most other organizations use titles like "Infrastructure Engineer," "SRE," or "Cloud Engineer" that are actually more generic-sounding and reflect that the role is a type of software engineer, rather than a computer janitor you throw your stuff farted out by Jenkins at before going back to a world where all you need to care about is whether your code passes tests and builds in Jenkins, and you might get a ticket later that it's segfaulting in prod.

Pollyanna posted:
And drat near everything wants AWS experience now.
|
# ? Aug 6, 2017 01:47 |
|
necrobobsledder posted:Mark my words, AWS is the Java of infrastructure.
|
# ? Aug 6, 2017 04:44 |
|
I see it as far more than just infrastructure, but 99%+ of the huge companies I've seen make it a big part of their strategy will never be able to think of it as more than IaaS. Most of these CTOs have never worked at a tech company in their entire careers, and that's a big deal. I've worked with a handful of the top 10 largest AWS customers so far ($10MM+/mo is the smallest) and they treat it like datacenters plus deny-by-default service offerings from a separate software vendor like MS or Oracle that must go through purchasing and so forth. Review is warranted, but taking a year to approve Lambda while denying EFS is just sillypants.

The deployment patterns I'm seeing are still lift-and-shifts taking 3+ years and rewrites that will take another 5+, so in 8 years they'll be where companies that use cloud somewhat right were 5 years ago, while the technology gap between laggards and fast movers keeps growing. The crazy thing is that datacenter management is so bad at the massive companies that $120MM/year for a couple hundred applications is a massive bargain. When you burn $2Bn/year on datacenters for worse uptime and worse performance than AWS, even with instances sitting at 2+ years of uptime, you're just bad at tech, full stop.
|
# ? Aug 6, 2017 15:47 |
|
necrobobsledder posted:The deployment patterns I'm seeing still are lift and shifts taking 3+ years and rewrites that will take another 5+, so in 8 years they will be where most companies that use cloud somewhat right will have been 5 years ago while the technology gap between laggards and fast movers has grown larger.
|
# ? Aug 6, 2017 17:45 |
|
Vulture Culture posted:
If you see AWS predominantly as an infrastructure play, then hoo boy are you underestimating Amazon.

I mostly use lambda nowadays, but I'd be interested in hearing what else you're talking about here, as I don't really follow this space much.
|
# ? Aug 6, 2017 18:52 |
|
StabbinHobo posted:
you can consult on these projects drat near in perpetuity, but all you've gone and done is spent 5 years making decent money off of dying/failing orgs.

Thermopyle posted:
I mostly use lambda nowadays, but I'd be interested in hearing what else you're talking about here as I don't really follow this space much.
|
# ? Aug 6, 2017 20:04 |
|
necrobobsledder posted:
There's a ton of service offerings that fit the "80%+ of users' needs" featuresets on both AWS and GCP beyond "make me servers and networks." There's entire IoT device management / identity platforms, service desks, machine learning based analysis, etc. With such offerings all based around pay-as-you-go models, they can potentially create Salesforce-like businesses with every other thing they put out (not that they are, but iteration / MVP blah blah). The organization I'm at now is a victim in part due to AWS commoditizing what used to be "secret sauce" (admittedly, it really wasn't ever that hard IMO). And with so many companies being built on top of AWS now, it makes it super easy for Amazon to acquire companies and integrate them in the future.
|
# ? Aug 6, 2017 20:52 |
|
Thermopyle posted:
I mostly use lambda nowadays, but I'd be interested in hearing what else you're talking about here as I don't really follow this space much.

lambda in some sense is one of those things, but also:

* l4/l7 load balancers
* intensely scalable key-value stores
* exabyte-scale object storage
* managed mysql/postgres/oracle + custom variants like aurora
* distributed tracing
* metrics, logs, alarms
* peta/exabyte-scale managed data warehouses
* several managed ML and "big" data systems
* build, deploy, and pipelines
* managed git
* CDN
* declarative infrastructure
* cloud windows desktops
* managed exchange, documents
* managed pub/sub
* managed queues, both in the form of SQS (lower overall throughput but with a very simple API for synchronizing consumers) and kinesis (effectively infinite throughput but requires more individual coordination)
* managed state machines
* managed work chat
* managed call center
* multi-account organizations
* a particularly thorough permissions system for all of this

and there's things I've missed, and things that have yet to be released.

EDIT: to make this less of an appeal to list, I'll add that most of this stuff is free or effectively free on top of the cost of compute/storage/networking. It's also billed in fractional increments of fractional periods of time, so you really do pay for just what you use. The biggest downside of this (and a thing we need to focus on correcting) is that it's so easy to forget about all the things you've started using and end up with a bill that doesn't accurately reflect what you actually used.

FamDav fucked around with this message at 22:52 on Aug 6, 2017
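The "billed in fractional increments of fractional periods of time" point can be sketched with a toy cost function. The rate constant and the 100ms rounding here are illustrative assumptions in the style of Lambda pricing, not quoted AWS numbers:

```python
# Sketch of pay-for-what-you-use billing: charge per unit of runtime,
# scaled by memory size, rounded up to a 100ms billing increment.
# The rate is a hypothetical placeholder, not real AWS pricing.
RATE_PER_GB_SECOND = 0.0000166667  # hypothetical $/GB-second

def invocation_cost(memory_mb: int, duration_ms: int) -> float:
    """Cost of one invocation, billed in 100ms increments."""
    # round duration up to the next 100ms increment (ceiling division)
    billed_ms = -(-duration_ms // 100) * 100
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * RATE_PER_GB_SECOND

# a 128MB function that runs for 230ms is billed as 300ms
cost = invocation_cost(128, 230)
```

The upside mentioned in the post follows directly: idle time costs nothing, but a forgotten function that fires constantly still accrues cost increment by increment.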
# ? Aug 6, 2017 22:28 |
|
What are some ways/systems people are using to manage media (videos, audio, etc.)? I don't think I want to store them in git, even with LFS enabled. Right now we just put them on S3, or they get manually uploaded.

I was thinking of building a central CDN that can have content shared by multiple sites so content managers can just link files directly, but it would be nice to have some kind of backend CMS to browse it and store it in S3. A plus if it can keep track of things like duplicates. It would be nice to just hand out a login and let people browse around and upload new content if needed without teaching them how S3 works (though for a lot of people, just telling them to use Cyberduck is enough). Or if the media can be stored like packages somewhere and tied to the code builds, maybe that would work.

EDIT: This looks kinda cool but is more of a self-hosted S3: https://www.minio.io

JHVH-1 fucked around with this message at 17:47 on Aug 7, 2017
# ? Aug 7, 2017 17:01 |
|
S3 and EFS are your primary options for large file storage. I wouldn't use EFS unless you're only storing files temporarily, due to its steep cost compared to S3, but EFS has substantially higher throughput (S3 has maxed out at 75 MBps per file for us with 16 concurrent multi-part download transfers running; EFS can go above that easily).

It sounds like you'll need some form of content management system if you're trying to do more than give a UI to technical people. Heck, the product I work on is specifically built for people having trouble managing large multimedia files and pushing them to distributors. There's expensive stuff like T3Media / Wazee built for large content providers like sports networks, but that sounds like overkill. Really, anything more than Cyberduck or whatnot and you're starting to get into CMS territory anyway.

I'd set up an AWS Lambda function to handle triggering on uploads and checksums, use EFS for temporary file storage to make checksumming faster, run instances, use SQS for the task queue, set up a basic page hosted out of S3 for the UI, and store file checksums in S3. If user volume isn't high, your primary costs would be the instances mounting the EFS NFS endpoints and performing checksums. The only reason I'd use instances here is that Lambda functions that run a long time can get pricey, and transferring files out of S3 can be really slow in a single-threaded Lambda function (I don't think you're allowed to spawn 16 threads to speed up S3 downloads as multi-part downloads).
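The checksumming step in a pipeline like this can run in constant memory by hashing the file in chunks instead of reading it whole, which matters for multi-gigabyte media. A minimal sketch (the chunk size is an arbitrary choice):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a large media file in 1 MiB chunks so memory use stays flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() with a sentinel keeps reading until read() returns b""
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The same loop works whether the file lives on an EFS mount or was streamed down from S3 to local disk first.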
|
# ? Aug 7, 2017 21:11 |
|
In VSTS/TFS, is there some way to tell it "Don't run my build definition multiple times in parallel"? I have some external dependencies that will break if I run a build definition twice concurrently. However, I cannot find any way to actually limit this. The subject is very difficult to Google, as well, since everyone seems to *want* more parallelism, so every answer is about how to make it happen.
|
# ? Aug 8, 2017 11:41 |
|
Everyone please lodge your complaints with the nature of "DevOps" jobs after I get a job. So, Docker question. I can wrap my head around general containers just fine, but more "interactive" ones are escaping me. Is setting something up with the intention of regularly passing commands to the container's contents going against the whole point of the matter? Specifically this. My limited understanding is that the image is intended for use in a CI setup. I imagine that if you're passing a limited set of commands on a given interval, then it's fine to have Jenkins/Chef/whatever fire off a given input when needed. Is that about right?
|
# ? Aug 8, 2017 18:15 |
|
Using volume mountpoints in a container (read: persistent state) is accepted as long as you know how to manage them effectively. Most of the critics are talking about production situations when it comes to containerized applications and services.
|
# ? Aug 8, 2017 23:04 |
|
EssOEss posted:
In VSTS/TFS, is there some way to tell it "Don't run my build definition multiple times in parallel"? I have some external dependencies that will break if I run a build definition twice concurrently. However, I cannot find any way to actually limit this.

Assuming private/on-prem agents, not hosted: you can set a custom Demand on the build definition and give one agent a matching Capability, but then you're limited to only ever running on that one agent, even when it's busy with other builds while other agents sit idle.

Fundamentally, though, your build process is totally broken if multiple builds can't run in parallel. I'd focus on fixing that problem. What is it doing with "external dependencies" that causes a failure? What are those external dependencies?
|
# ? Aug 8, 2017 23:41 |
|
Yeah, I do not want to limit it to one agent. For the sake of simplicity, you can imagine my build process uploading http://example.com/latestversion.exe. If two builds happen in parallel, the last one to finish wins, and there's no way to know whether the one that actually wrote the file was from the most recent check-in. Serializing the builds would be the easiest way to eliminate such issues.
|
# ? Aug 9, 2017 10:51 |
|
EssOEss posted:
Yeah, I do not want to limit it to one agent.

This seems like a weird thing to be doing anyway, but why not just have your build process emit versioned artifacts, then have another job that marks the newest one as 'latest'? I know with TeamCity or Jenkins it should be pretty trivial to figure out which is most recent based on the source revision and name them appropriately.
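The versioned-artifact idea removes the race entirely: every build publishes under its own source revision, and a separate promote step picks the highest revision rather than whichever build finished last. A sketch, with a made-up naming scheme for illustration:

```python
# Each build publishes "app-r<revision>.exe" instead of overwriting a
# single "latestversion.exe"; a separate promote step picks the highest
# revision to mark as latest. The naming scheme here is illustrative.
import re

def pick_latest(artifact_names):
    """Return the artifact with the highest numeric source revision."""
    def revision(name):
        m = re.search(r"-r(\d+)\.exe$", name)
        if m is None:
            raise ValueError("unversioned artifact: " + name)
        return int(m.group(1))
    return max(artifact_names, key=revision)

builds = ["app-r101.exe", "app-r99.exe", "app-r100.exe"]
latest = pick_latest(builds)  # "app-r101.exe" even if r99 finished last
```

Because the promote step is a pure function of the published names, it doesn't matter in what order concurrent builds complete.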
|
# ? Aug 9, 2017 13:37 |
|
EssOEss posted:
Yeah, I do not want to limit it to one agent.

Use a Publish Artifacts step. Builds already natively have the ability to publish their outputs so they can be available downstream.
|
# ? Aug 9, 2017 14:13 |
|
Anybody going to Boston DevOps Days?
|
# ? Sep 18, 2017 13:28 |
|
OWLS! posted:
Anybody going to Boston DevOps Days?

I'm on my way there right now, if only the T would run a little faster.
|
# ? Sep 18, 2017 13:49 |
|
Blinkz0rz posted:
I'm on my way there right now if only the T would run a little faster.

As a Boston resident, I have bad news for you~

I went last year but had a conflict this week, unfortunately, and can't make it. Hope there are some good sessions! I always enjoy DevOps Days events; been to Denver and Boston so far.
|
# ? Sep 18, 2017 13:53 |
|
The T, run well when you need it to? Lol. Also, oh boy, goonops meet?

Docjowles posted:
I went last year but had a conflict this week unfortunately and can't make it.

Boo

OWLS! fucked around with this message at 14:22 on Sep 18, 2017
# ? Sep 18, 2017 14:08 |
|
Docjowles posted:
as a Boston resident, I have bad news for you~

No, trust me, I'm intimately familiar with how bad the T is. I just usually avoid the Orange Line, but welp.
|
# ? Sep 18, 2017 14:21 |
|
Hi, relative Docker noob here. The Docker docs really strongly urge you to use (named) volumes as opposed to bind mounts. However, they're really poor at persuading me that volumes are so obviously superior; in fact, I may be paranoid, but they really sound like a pushy vendor when they do so. Here is what they say:

quote:
Volumes have several advantages over bind mounts:
* You can manage volumes using Docker CLI commands or the Docker API.
* Volumes work on both Linux and Windows containers.
* Volumes can be more safely shared among multiple containers.
* Volume drivers allow you to store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
* A new volume's contents can be pre-populated by a container.

So far I'm leaning strongly towards bind mounts as being both easier to manage and easier to protect, while only using volumes for convenience when putzing around with data I don't mind losing. The one actual argument I've found against them is that they may incur a performance loss, but this appears to only be significant on non-Linux hosts. Am I wildly off base somewhere?
|
# ? Sep 22, 2017 11:18 |
|
The main thing with bind mounts is that they're fine for individual hosts, but manually-managed local-system fiddly-bits are an approach that doesn't scale well into a container orchestration system like Kubernetes, Swarm, or Nomad. It's also substantially harder to do them in a cross-platform way if you're trying to share build tooling with e.g. developers on Mac and Windows hosts, and removing that ability nukes a lot of the benefit that Docker provides in the first place. If you're just looking at Docker to do image-sharing and some namespace isolation to solve one very particular deployment problem, your approach is probably fine.
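To make the two styles concrete, here's a hypothetical Compose file showing a named volume next to a bind mount; the service names, images, and paths are made up for illustration:

```yaml
# Hypothetical docker-compose.yml contrasting the two mount styles.
version: "3"
services:
  db:
    image: postgres:9.6
    volumes:
      # named volume: Docker manages where this lives on the host,
      # so the definition travels cleanly to orchestrators and other OSes
      - pgdata:/var/lib/postgresql/data
  web:
    image: nginx:alpine
    volumes:
      # bind mount: ties the container to this exact host path,
      # which is fine on a single host but doesn't scale out well
      - ./site:/usr/share/nginx/html:ro

volumes:
  pgdata:
```

The named volume is declared once at the bottom and referenced by name, which is exactly what lets a scheduler place the container anywhere; the bind mount only works where `./site` exists.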
|
# ? Sep 22, 2017 14:22 |
|
Has anyone ever managed to weld Docker and Maven together in Jenkins so that it's possible to take advantage of the Maven plugin while running everything inside a container? Writing a Jenkinsfile that uses Docker? That's easy. Installing the Maven integration plugin and having it read the POM and figure everything out by itself? That's easy. Putting the two things together?
|
# ? Sep 25, 2017 17:07 |
|
The solution to most problems involving Jenkins plugins is to stop using plugins for anything that doesn't inherently require a Jenkins plugin.
|
# ? Sep 25, 2017 19:29 |
|
Plorkyeran posted:The solution to most problems involving Jenkins plugins is to stop using plugins for anything that doesn't inherently require a Jenkins plugin.
|
# ? Sep 25, 2017 22:35 |
|
Wait, Maven plugin AND Jenkins? Depending on the Maven plugin used for Docker (some substitute variables into Dockerfiles, others flat-out generate them, last I saw), you're opening Pandora's box or signing up for cancer. You may want to stick with a Jenkinsfile that launches a Maven or Gradle target/goal and nothing more, to contain the configuration drift problems that tend to happen with Jenkins installs.
|
# ? Sep 26, 2017 00:48 |
|
necrobobsledder posted:
Wait, Maven plugin AND Jenkins? Depending upon the Maven plugin used for Docker (some are substituting variables into Dockerfiles, others flat out generate them last I saw) you’re opening up Pandora’s box or signing up for cancer.

I decided the Maven plugin could eat poo poo and I'm just running shell commands inside the container, as Grod intended. Also, why is the Jenkins documentation so sparse, and why do the little strands that exist all suck? Is there a single coherent reference for the Jenkinsfile DSL anywhere, even though I already know the answer?
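For reference, the shell-commands-inside-a-container approach can be sketched as a declarative Jenkinsfile like this; the image tag and Maven goals are illustrative guesses, not a vetted setup:

```groovy
// Sketch: run Maven inside a container from a Jenkinsfile,
// skipping the Maven integration plugin entirely.
pipeline {
    agent {
        // Jenkins pulls the image and runs all steps inside it
        docker { image 'maven:3.5-jdk-8' }
    }
    stages {
        stage('Build') {
            steps {
                // plain shell inside the container; no Maven plugin needed
                sh 'mvn -B clean verify'
            }
        }
    }
}
```

Everything the build needs lives in the image, so the agent only has to provide Docker.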
|
# ? Sep 26, 2017 02:52 |
|
There's a doc by Wilson Mar on the DSL that I found useful in the past: https://wilsonmar.github.io/jenkins-plugins/

I know people have had the most success with Jenkins by simply using shell scripts in the repo. It makes it tougher to override some tool paths, but I wrote up a list of the shell variables Jenkins makes available and life was good. The only thing left is to figure out a way to let developers control the container used to build their project for matrix builds.
|
# ? Sep 26, 2017 13:46 |
|
Me, 3 months ago: "If you share unversioned deployment scripts across dozens of similar applications, eventually someone will make a breaking change and you'll get abrupt deployment failures."

Client: "That will never happen."

Them, 3 minutes ago: "Someone made a breaking change to our deployment scripts and now releases are failing left and right!"

Sometimes I hate being right. Tomorrow: implement versioning of their deployment scripts.
|
# ? Sep 28, 2017 03:05 |
|
I have taken over a home-grown build system that's a mix of Windows, Mac, and Linux machines. Is there any good container-style way of deploying/managing code to the build servers in a mixed environment like this? If I could bundle up the build-system code with the Xcode/Visual Studio/Java/whatever versions without having to manage VMs, I'd be happy. But maybe VMs are the answer, and I should use the VMware API to allocate a corresponding VM each time I need to run a build? The Macs are standalone and not part of the VMware cluster, but I guess I could add them.

Right now I don't have any automation or configuration/orchestration to manage the servers themselves, but with fewer than 10 machines it's not dire (just annoying). I really want to transition away from this, though; it would be awesome if I could dockerize Xcode, build scripts, and dependencies all together and worry less about OS state. Is there an approach that's working for people?
|
# ? Oct 5, 2017 19:50 |
|
On noob Docker chat: the thing to realize is that the majority of the recommendations assume you're running in the cloud, since that's where most of Docker's use is. You can totally use it as a simple virtualized application environment on a single host; just realize the intended market, so all the guidance is going to assume multiple hosts in the cloud (see named volumes vs bind mounts). We use it heavily in our CI process to simplify agent requirements: the only things agents need are Vagrant and Docker, everything else provides its own build environment via one or both of those, and the builders basically just 'docker run' everything. It's nice when people need to pull down a project they're not working on, too; no need to figure out how to compile or install the dependencies, they can be up and running immediately.
|
# ? Oct 6, 2017 16:53 |
|
mr_package posted:
I have taken over a home-grown build system that's a mix of windows, mac, and linux machines. Is there any good container-style way of deploying/managing code to the build servers in a mixed environment like this?

Docker supports all those operating systems. Docker also supports multi-stage builds (you first build an image that builds your app, out of which you build your app image). I have not actually used the latter, due to having a fairly low-expectations build process, but in theory you can make transient containers that build your app container images with Docker.

For any sort of scripting, use PowerShell Core, which is multiplatform; this lets your multiplatform apps use the same setup scripts on all platforms. No need to make different scripts per OS, just maybe some "if ($IsLinux)" statements here and there.

For building the apps, I guess it depends. I am in a .NET shop, so it's either .NET Framework (build and deploy on Windows) or .NET Core (build on any OS, deploy to any OS). I generally keep my builds on Windows just for uniformity's sake, regardless of where I deploy to. However, you might not have that luxury if you are not .NET based.

It is not clear to me what exactly your scenario is. You talk about Xcode but then also talk about Windows. Go into details.
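The multi-stage build mentioned above might look like this for a .NET Core app; the image tags, paths, and app name are illustrative assumptions (multi-stage syntax needs Docker 17.05+):

```dockerfile
# Stage 1: a throwaway build image carrying the full SDK
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /out

# Stage 2: the app image only carries the runtime plus published output,
# so the SDK never ships to production
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

The build toolchain lives entirely in stage 1, which is the "transient container that builds your app container image" idea from the post.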
|
# ? Oct 6, 2017 18:14 |
|
These servers build apps for each platform, e.g. Linux builds Android, Mac builds for OSX, etc., so each platform still runs its native tools. I was just wondering what people use to keep things under control in this type of environment. If I could have an "OSX Build Server, Xcode 8.3" container and spin it up on demand, for example, rather than installing Xcode 8.3 on the Mac build servers themselves, containerizing the whole build toolchain just seems ideal. But I don't think we're there yet, especially in terms of OSX support. (Docker on Mac is still virtualized Linux at its core, so I don't think it can run xcodebuild etc.) Maybe for this use case VMware is still the best option.
|
# ? Oct 6, 2017 20:06 |
|
|
mr_package posted:
These servers build apps for each platform e.g. Linux is building Android, Mac building for OSX, etc. so each platform does still run its native tools. I was just wondering what people use to keep things under control in this type of environment. If I could have an "OSX Build Server xcode 8.3" container and spin it up on demand for example, rather than installing xcode8.3 on the mac build servers themselves-- if I could containerize the whole build toolchain, that just seems kind of ideal.

I had a similar build-server catalogue at my last place and an IT department resistant to VMs, so I ended up using Saltstack to auto-install all my build dependencies, and it basically went okay.
|
# ? Oct 6, 2017 20:10 |