|
? Human readable programming languages are a "bandaid" then. "Wow something isn't perfect? Then it's trash and using it is a complete waste of time." Better yet, let's not write any software at all until we have the perfect toolchain, that'll get us those flying cars quicker!
|
# ? Feb 17, 2020 01:07 |
|
|
|
Absurd Alhazred posted:There are externalities to not pushing library developers to actually keep up with the latest versions. Eventually you're going to run into combinatorial explosion. You're arguing against a world that doesn't exist. No one is saying "everything is perfect now, stop improving things." They're saying keep improving, and Docker will let us work until then. I mean, the people depending on OpenCV 3 aren't just going to stay there. Eventually they will get off of it, and in the meantime they can continue to work and exist because containers let them do so. Now, containers might slow down that process because they make it less urgent. Or they might not, because they let entities continue to exist because their software keeps working...
|
# ? Feb 17, 2020 01:13 |
|
I honestly didn't expect the answer to be "developers should have anticipated and made their code compatible with a library that wouldn't exist for two years".
|
# ? Feb 17, 2020 01:18 |
|
Jabor posted:I honestly didn't expect the answer to be "developers should have anticipated and made their code compatible with a library that wouldn't exist for two years". That's not what I'm saying, but you're welcome to wrap my post in a docker that's compatible with your prejudices.
|
# ? Feb 17, 2020 01:20 |
|
Absurd Alhazred posted:That's not what I'm saying, but you're welcome to wrap my post in a docker that's compatible with your prejudices. You might need to be clearer about what you are saying then, because I'm struggling to think of another way to interpret "the rear end in a top hat is the person whose software is only compatible with the older version of the library".
|
# ? Feb 17, 2020 01:25 |
|
Honestly, there is a huge (light-years) gap between "if it's not in Docker it's wrong" and "oh well, I've tried everything and apparently Docker is the only solution". Currently, the first approach seems to be the preferred one among large swaths of developers, or devops, or god knows what other kind of technical people. And that is wrong. And it is a fad. Kernel cgroups is a good technology. It's here to stay, since it provides so many benefits. But just going there "by default", sigh... technical people should be better than that. "Do I need it now?" should be the first question. And that question is neither asked nor answered.
|
# ? Feb 17, 2020 01:28 |
|
Absurd Alhazred posted:(that's what breaks backwards compatibility between OpenCV 4 and 3, by the way), then the problem is you. There were API changes between OpenCV 3 and 4.
|
# ? Feb 17, 2020 01:45 |
|
Volguus posted:Honestly, there is a huge (light years) gap between: Some of the software I use and write benefits from Docker. Some of it doesn't, and could probably just be deployed the old-fashioned way. But once I already have a CI/CD pipeline in place that builds Docker images, publishes them to a Docker registry, and deploys them using a Docker orchestrator to keep track of versions, centralize logging, monitor metrics, and route public endpoints, then guess what? It becomes simpler, not more complex, to just add a dockerfile to every project and operate them all the same way, instead of each project having its own special snowflake deployment story.
|
# ? Feb 17, 2020 02:09 |
|
Math keeps changing: JavaScript math is even more of a horror than the rest of the language.
|
# ? Feb 17, 2020 02:14 |
|
NihilCredo posted:Some of the software I use and write benefits from Docker. If all your projects are basically the same thing with just a different endpoint, then yes, you are definitely right. The software that I write is different enough that a different CI/CD pipeline is required for each project. My little snowflakes, I call them.
|
# ? Feb 17, 2020 02:29 |
|
Volguus posted:If all your projects are basically the same thing with just a different endpoint, then yes, you are definitely right. The software that I write is different enough that a different CI/CD pipeline is required for each project. My little snowflakes, I call them. This doesn't really make sense. Docker (well, OCI) images are an output format for deployable binary packages. Assuming that what you write can run on a Linux system, you can run the image build at the end of each special pipeline, and get a Docker-compatible image out of it. The steps for turning your "snowflake" output artifacts into a Docker image are in the Dockerfile specific to that artifact, so they exist alongside specific code and build definitions. All the rest - publishing to a registry, kicking off deployment automation, and so forth - is shared, because at that point it should just be shipping an opaque set of layer blobs. Have you ever built and run a Docker image? ultrafilter posted:Math keeps changing ¯\_(ツ)_/¯ It's not good, but it just comes down to an ambiguous spec. The C99 and C11 specs have the same ambiguity. Java puts some strict bounds on error in its spec for basic floating-point math functions, but also explicitly reserves the right to change exact outputs. If extremely precise math is important to you, then use a library with rigorously defined behavior. There are a lot of horrifying JS-specific things, but this particular problem is just a horrifying part of writing a language specification with enough wiggle room that implementers aren't forced to make bad choices for their platform, but also enough detail that people developing programs against the spec can depend on it.
|
# ? Feb 17, 2020 03:18 |
|
I was pretty far from the deployment end of our cloud services, but if I wanted to update LLVM or something, I could still pretty trivially update a dockerfile to build or grab the new LLVM version I needed, and the build system would happily chug along and create images with the new version for our build system, my development machine, and the eventual cloud service provider. I could then point a config file in our code repo to the new images and be reasonably confident that if my update compiled and passed tests on the container running in Docker on my Windows machine, then it would work the same way out in the clouds somewhere. I just assigned colors to pixels, I'm not an ops type, but Docker seemed a reasonably painless way to do all this crap that unfortunately had to be done. Certainly it was a big improvement on what we had before, where I had a local CentOS image that was subtly different from whatever the hell we deployed, and any dependency updates would be communicated through an extensive game of jungle telegraph. I'm not sure what a "standards and contracts" alternative solution to all this would be: we and our cloud service provider agree on a precise environment and set of libraries, including exact build flags for something complex like LLVM, that we develop for, they promise to support, and we then run on their barest metal? I don't think that would be appealing to most people, on both sides of the contract. If your point is just that we in fact shouldn't have to do this crap — ABIs should just be stable, libraries secure, and dependencies get along — sure, that would be nice, but unfortunately the world is a big human mess and they don't necessarily do any of those things. Xerophyte fucked around with this message at 03:49 on Feb 17, 2020 |
# ? Feb 17, 2020 03:44 |
|
Space Gopher posted:This doesn't really make sense. This assumes that the goal is to create a docker image. As I can see in this thread, this is the goal for a lot of you folks. Oftentimes, the default goal. What I'm arguing against is this being the goal, or at least the default target of our output. Docker is not the purpose. Building software is the purpose. Docker is a tool that can sometimes help reach that purpose. It cannot be a goal in and of itself; it's merely a tool. A suitable tool in certain scenarios. Space Gopher posted:Have you ever built and run a Docker image? Yes I have. Both. But for, apparently, different purposes than you. One of my goals for one of my projects was to build a Debian 9.4 .deb package that could be installed on computers running Debian 9.4. I'm running Fedora on my workstation, they're running Debian, therefore a custom build had to be made. Docker fit that purpose because I didn't want to create a Debian 9.4 build VM. I was lazy. Docker was an excuse not to do it. The correct and proper way to go about it would have been to just add that particular environment to the build system.
|
# ? Feb 17, 2020 04:12 |
|
Volguus posted:This assumes that the goal is to create a docker image. As I can see in this thread, this is the goal for a lot of you folks. Often times, the default goal. What I'm arguing against is this being the goal, or at least the default target of our output. Docker is not the purpose. Building software is the purpose. Docker is a tool that sometimes can help reach that purpose. It cannot be a goal in and of itself, it's merely a tool. A suitable tool in certain scenarios. I seriously doubt anyone in this thread has a goal of creating a docker image instead of creating usable software.
|
# ? Feb 17, 2020 04:34 |
|
I make software but it being usable isn't always a requirement.
|
# ? Feb 17, 2020 04:52 |
|
Thermopyle posted:I seriously doubt anyone in this thread has a goal of creating a docker image instead of creating usable software. https://forums.somethingawful.com/showthread.php?noseen=1&threadid=3409898&pagenumber=281&perpage=40#post499380078
|
# ? Feb 17, 2020 04:58 |
|
Seriously, though, the goal in that post is to mess around with the software package in isolation, not to make a docker image.
|
# ? Feb 17, 2020 05:13 |
|
Volguus posted:https://forums.somethingawful.com/showthread.php?noseen=1&threadid=3409898&pagenumber=281&perpage=40#post499380078 But...that doesn't prove your claim at all?
|
# ? Feb 17, 2020 05:15 |
|
ultrafilter posted:Math keeps changing Getting an exact particular float from a complicated math operation implemented in a library is not a reasonable expectation. Floating point values are not real numbers, and if you're implementing numerical code, you need to deal with that. (Especially since the value he's looking for is not the closest double to the correct one: he expects gamma(11.54)=13098426.039156161 and the actual closest one is ~13098426.039075902)
|
# ? Feb 17, 2020 09:33 |
|
Absurd Alhazred posted:It's loving 2020. If you're strongly dependent on a 3rd party library that depends on C++98 ABI (that's what breaks backwards compatibility between OpenCV 4 and 3, by the way), then the problem is you. This is some weak poo poo, to focus on the specific example given rather than the general class of problems it was obviously meant to represent
|
# ? Feb 17, 2020 11:42 |
|
more importantly than merely delivering working software, docker provides another way to wile away the time before we die
|
# ? Feb 17, 2020 11:42 |
|
Foxfire_ posted:Getting an exact particular float from a complicated math operation implemented in a library is not a reasonable expectation. Floating point values are not real numbers, and if you're implementing numerical code, you need to deal with that. (Especially since the value he's looking for is not the closest double to the correct one: he expects gamma(11.54)=13098426.039156161 and the actual closest one is ~13098426.039075902) Completely agree. It's the Node test that is wrong. I would even expect to get different numbers on different hardware (32 vs 64 bits). I might have a different expectation of a virtual-machine-based programming language that made a point about stability in this area — I don't think that's the case for JavaScript, but I honestly don't know.
|
# ? Feb 17, 2020 13:24 |
|
Making a docker image is about as anodyne as making a zip file, and not some symptom of things being profoundly wrong with the world, any more than everything else going on right now.
|
# ? Feb 17, 2020 14:19 |
|
There is no ethical containerization under capitalism.
|
# ? Feb 17, 2020 16:27 |
|
You're all debating this Hammerite posted:This is some weak poo poo, to focus on the specific example given rather than the general class of problems it was obviously meant to represent When we've already solved this argument with this redleader posted:we're all missing the greater point here in that in a perfect world, computers wouldn't exist
|
# ? Feb 17, 2020 16:33 |
|
CPColin posted:There is no ethical containerization under capitalism.
|
# ? Feb 17, 2020 17:34 |
|
Absurd Alhazred posted:There are externalities to not pushing library developers to actually keep up with the latest versions. Eventually you're going to run into combinatorial explosion. Eventually is yesterday.
|
# ? Feb 17, 2020 18:09 |
|
Volguus posted:Yes I have. Both. But for, apparently, different purposes than you. One of my goals for one of my projects was to build a Debian 9.4 .deb package that could be installed on computers running Debian 9.4. I'm running Fedora on my workstation, they're running Debian, therefore a custom build had to be made. Docker fit that purpose because I didn't want to create a Debian 9.4 build VM. I was lazy. Docker was an excuse not to do it. The correct and proper way to go about it would have been to just add that particular environment to the build system. Yeah, but instead of going through all that trouble, you could have handed them a docker image that was built with a Fedora base image. Then they wouldn't care if it was Debian or not.
|
# ? Feb 17, 2020 18:31 |
|
So is Docker mostly used to isolate/reproduce the build environment, and not really the runtime environment? If so, Nix sounds like a good choice as well.
|
# ? Feb 17, 2020 18:41 |
|
Foxfire_ posted:Getting an exact particular float from a complicated math operation implemented in a library is not a reasonable expectation. Floating point values are not real numbers, and if you're implementing numerical code, you need to deal with that. (Especially since the value he's looking for is not the closest double to the correct one: he expects gamma(11.54)=13098426.039156161 and the actual closest one is ~13098426.039075902) The talk of it being relevant to science reproducibility without any mention of numerical stability is also not inspiring: unless somehow this sort of thing makes an algorithm not work (but that's what stable algorithms are there for?), the difference is almost certainly far less than the measurement uncertainty of the input data. Unless there is something really special going on which needs lots and lots of digits of precision for Science(tm) reasons, in which case you probably shouldn't be doing stuff with doubles anyway?
|
# ? Feb 17, 2020 18:46 |
|
hey this was before the weekend but 90+ posts of arguing about the Liskov Containerization Principle didn't cover it, so TraderStav posted:As someone still learning development I'm finding the middle part of your statement a big grey area where I can't resolve the difference between actual technologies and satire. I'm actually hoping none of it is satire so I can sound like a Brit when I talk as I approach that level of competency. knowing the terms isn't actually required. half the time you just have to vomit the same sequence at some other engineer, re-creating the solution you heard in their head with the context you lack. knowing things is optional, just work on repeating arbitrary strings of jargon
|
# ? Feb 17, 2020 19:00 |
|
Athas posted:So is Docker mostly used to isolate/reproduce the build environment, and not really the runtime environment? No, it's the runtime environment. Being able to reproduce, share, and version a stable, known-good build environment is just a bonus.
|
# ? Feb 17, 2020 19:30 |
|
ratbert90 posted:Yeah, but instead of going through all that trouble, you could have handed them a docker image that was built with a Fedora base image. Then they wouldn't care if it was Debian or not. I was going through all that trouble anyway. If I am to piss around with docker, it would just make sense to spare others the same fate. And I would not have been able to use a Fedora base image; I would have had to use something that provides both CUDA and tensorflow (maybe it exists already, maybe not), so for me there wouldn't have been any gain whatsoever. And for them even less, since they already have CUDA, tensorflow, and nvidia drivers installed because of the work they do. Essentially, providing a docker image (can you even run an X11 UI application from docker? Maybe export the display and forward ports...) and telling them to run that would have been a disservice and a headache for absolutely everyone, myself included. Work for work's sake, finding a reason to run docker. This is pretty much the definition of "docker is the goal".
|
# ? Feb 17, 2020 21:06 |
|
Volguus posted:One of my goals for one of my projects was to build a Debian 9.4 .deb package that could be installed on computers running Debian 9.4. I'm running Fedora on my workstation, they're running Debian, therefore a custom build had to be made. Docker fit that purpose because I didn't want to create a Debian 9.4 build VM. I was lazy. Docker was an excuse not to do it. The correct and proper way to go about it would have been to just add that particular environment to the build system. The correct and proper way is to do more work for the same result? OddObserver posted:The talk of it being relevant to science reproducibility without any mention of numerical stability is also not inspiring: unless somehow this sort of thing makes an algorithm not work (but that's what stable algorithms are there for?), the difference is almost certainly far less than the measurement uncertainty of the input data. Unless there is something really special going on which needs lots and lots of digits of precision for Science(tm) reasons, in which case you probably shouldn't be doing stuff with doubles anyway? Right, standard JavaScript is not suitable for some applications, but people really want to do those things in it because it's the hammer they know.
|
# ? Feb 17, 2020 21:14 |
|
My takeaway from Docker is package managers were a mistake and everyone would rather deploy their applications like Windows DLLs.
|
# ? Feb 17, 2020 22:32 |
|
Athas posted:So is Docker mostly used to isolate/reproduce the build environment, and not really the runtime environment? It's for both. I have multi-stage builds where the build is containerized using some given base container that contains the build toolchain, and then the result of the build (static binary or Python wheel or whatever) is transferred to a stripped-down container.
|
# ? Feb 17, 2020 23:11 |
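For anyone who hasn't seen the pattern, a minimal sketch of that kind of multi-stage build. The Go base image, binary name, and build flags here are illustrative assumptions, not anyone's actual setup:

```dockerfile
# Build stage: a base image carrying the full toolchain.
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
# Produce a static binary so the runtime stage needs no libc.
RUN CGO_ENABLED=0 go build -o /out/app .

# Runtime stage: only the build artifact is copied across,
# so the shipped image contains none of the toolchain.
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

The final image contains only the copied artifact; the compiler and source never ship.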
|
Volguus posted:Ok, please help me here, what kind of technologies does Docker allow you to work on that you would otherwise have been unable to? I thought about them for about 20 seconds now and I cannot come up with an answer. I must have a very poor imagination. I don't think that there's any technology that absolutely needs docker, much like how there's not any nail that absolutely needs a hammer when a rock works well enough
|
# ? Feb 17, 2020 23:16 |
|
I don't see why anyone would need git, my convention of inserting version numbers into file names works well enough and anyone who thinks otherwise should be brought out back and shot. Now please excuse me, the nurses at my care home only permit one hour of shitposting per day
|
# ? Feb 17, 2020 23:23 |
|
QuarkJets posted:I don't see why anyone would need git, my convention of inserting version numbers into file names works well enough and anyone who thinks otherwise should be brought out back and shot. Now please excuse me, the nurses at my care home only permit one hour of shitposting per day Never thought we would have VMS posting here!
|
# ? Feb 17, 2020 23:31 |
|
|
Good god some of you people are insufferable cunts.
|
# ? Feb 17, 2020 23:36 |