NtotheTC
Dec 31, 2007


Human-readable programming languages are a "bandaid" then. "Wow, something isn't perfect? Then it's trash and using it is a complete waste of time."

Better yet, let's not write any software at all until we have the perfect toolchain, that'll get us those flying cars quicker!

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Absurd Alhazred posted:

There are externalities to not pushing library developers to actually keep up with the latest versions. Eventually you're going to run into combinatorial explosion.

You're arguing against a world that doesn't exist. No one is saying "everything is perfect now, stop improving things."

They're saying keep improving and docker will let us work until then.

I mean, the people depending on OpenCV 3 aren't just going to stay there. Eventually they will get off of it, and in the meantime they can continue to work and exist because containers let them do so.

Now, containers might slow down that process because they make it less urgent. Or they might not, because they let entities continue to exist while their software keeps working...

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
I honestly didn't expect the answer to be "developers should have anticipated and made their code compatible with a library that wouldn't exist for two years".

Absurd Alhazred
Mar 27, 2010

by Athanatos

Jabor posted:

I honestly didn't expect the answer to be "developers should have anticipated and made their code compatible with a library that wouldn't exist for two years".

That's not what I'm saying, but you're welcome to wrap my post in a docker that's compatible with your prejudices.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Absurd Alhazred posted:

That's not what I'm saying, but you're welcome to wrap my post in a docker that's compatible with your prejudices.

You might need to be clearer about what you are saying then, because I'm struggling to think of another way to interpret "the rear end in a top hat is the person whose software is only compatible with the older version of the library".

Volguus
Mar 3, 2009
Honestly, there is a huge (light years) gap between:

- if it's not in docker it's wrong

and

- oh well, I've tried everything and apparently docker is the only solution.

Currently, the first approach seems to be the preferred one among large swaths of developers or devops or god knows what kind of other technical people. And that is wrong. And it is a fad. Kernel cgroups is a good technology. It's here to stay since it provides so many benefits. But just going there "by default", sigh ... technical people should be better than that. "Do I need it now" should be the first question asked. And that question is neither asked nor answered.

fritz
Jul 26, 2003

Absurd Alhazred posted:

(that's what breaks backwards compatibility between OpenCV 4 and 3, by the way), then the problem is you.

There were API changes between OpenCV 3 and 4.

NihilCredo
Jun 6, 2011

Suppress anger in every way you possibly can:
one display of it will defame you more than many virtues will commend you

Volguus posted:

Honestly, there is a huge (light years) gap between:

- if it's not in docker it's wrong

and

- oh well, I've tried everything and apparently docker is the only solution.

Currently, the first approach seems to be the preferred one among large swaths of developers or devops or god knows what kind of other technical people. And that is wrong. And it is a fad. Kernel cgroups is a good technology. It's here to stay since it provides so many benefits. But just going there "by default", sigh ... technical people should be better than that. "Do I need it now" should be the first question asked. And that question is neither asked nor answered.

Some of the software I use and write benefits from Docker.

Some of it doesn't, and could probably just be deployed the old-fashioned way.

But once I already have a CI/CD pipeline in place that builds Docker images, publishes them to a Docker registry, and deploys them using a Docker orchestrator to keep track of versions, centralize logging, monitor metrics, and route public endpoints, then guess what? It becomes simpler, not more complex, to just add a dockerfile to every project and operate them all the same way, instead of each project having its own special snowflake deployment story.
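
For illustration, the per-project cost can be as small as the sketch below (hypothetical Python service and names, not my actual setup; the registry, orchestrator, logging, and routing all live in the shared pipeline, not in this file):

code:
# Hypothetical Dockerfile for one project in the shared pipeline.
# Everything downstream of it (registry push, deploy, logging,
# routing) is identical across projects and never looks inside.
FROM python:3.8-slim
WORKDIR /app
# Install pinned dependencies first so the layer cache skips this
# step when only application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "-m", "myservice"]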

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Math keeps changing

JavaScript math is even more of a horror than the rest of the language.

Volguus
Mar 3, 2009

NihilCredo posted:

Some of the software I use and write benefits from Docker.

Some of it doesn't, and could probably just be deployed the old-fashioned way.

But once I already have a CI/CD pipeline in place that builds Docker images, publishes them to a Docker registry, and deploys them using a Docker orchestrator to keep track of versions, centralize logging, monitor metrics, and route public endpoints, then guess what? It becomes simpler, not more complex, to just add a dockerfile to every project and operate them all the same way, instead of each project having its own special snowflake deployment story.

If all your projects are basically the same thing with just a different endpoint, then yes, you are definitely right. The software that I write is different enough that a different CI/CD pipeline is required for each project. My little snowflakes, I call them.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Volguus posted:

If all your projects are basically the same thing with just a different endpoint, then yes, you are definitely right. The software that I write is different enough that a different CI/CD pipeline is required for each project. My little snowflakes, I call them.

This doesn't really make sense.

Docker (well, OCI) images are an output format for deployable binary packages. Assuming that what you write can run on a Linux system, you can run the image build at the end of each special pipeline, and get a Docker-compatible image out of it. The steps for turning your "snowflake" output artifacts into a Docker image are in the Dockerfile specific to that artifact, so they exist alongside specific code and build definitions. All the rest - publishing to a registry, kicking off deployment automation, and so forth - is shared, because at that point it should just be shipping an opaque set of layer blobs.
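
Concretely, the per-artifact piece can be as dumb as this sketch (hypothetical paths and names, assuming the project's pipeline has already produced build/output/myapp):

code:
# Hypothetical Dockerfile kept next to one "snowflake" project's
# build definitions. The project's own pipeline builds the artifact
# however it likes; this file only packages the result as an image
# the shared stages can push and deploy as an opaque set of layers.
FROM debian:buster-slim
COPY build/output/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]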

Have you ever built and run a Docker image?

ultrafilter posted:

Math keeps changing

JavaScript math is even more of a horror than the rest of the language.

¯\_(ツ)_/¯ It's not good, but it just comes down to an ambiguous spec. The C99 and C11 specs have the same ambiguity. Java puts some strict bounds on error in its spec for basic floating point math functions, but also explicitly reserves the right to change exact outputs. If extremely precise math is important to you, then use a library with rigorously defined behavior.
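
As a sketch of what "rigorously defined behavior" can look like, in Python rather than JS: the standard decimal module computes to an explicit precision and rounding mode, so the same program prints the same digits on every conforming platform, unlike doubles routed through the platform's C math library.

code:
# Reproducible arithmetic with an explicitly specified precision and
# rounding mode, independent of the platform libm.
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().prec = 28
getcontext().rounding = ROUND_HALF_EVEN

print(Decimal(1) / Decimal(7))  # 0.1428571428571428571428571429, everywhere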

There are a lot of horrifying JS-specific things, but this particular problem is just a horrifying part of writing a language specification with enough wiggle room so that implementers aren't forced to make bad choices for their platform, but also enough detail that people developing programs against the spec can depend on it.

Xerophyte
Mar 17, 2008

This space intentionally left blank
I was pretty far from the deployment end of our cloud services, but if I wanted to update LLVM or something I could still pretty trivially update a dockerfile to build or grab the new LLVM version I needed, and the build system would happily chug along and create images with the new version for CI, my development machine, and the eventual cloud service provider. I could then point a config file in our code repo to the new images and be reasonably confident that if my update compiled and passed tests on the container running in Docker on my Windows machine, then it would work the same way out in the clouds somewhere.
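
In practice that bump can be a one-line edit to something like the sketch below (hypothetical base image and package versions, not our actual setup):

code:
# Hypothetical build-environment image: the toolchain version is
# pinned here, so updating LLVM is an edit to this file rather than
# to every CI worker, laptop, and cloud host separately.
FROM ubuntu:18.04
ARG LLVM_VERSION=9
RUN apt-get update \
    && apt-get install -y clang-${LLVM_VERSION} lld-${LLVM_VERSION} \
    && rm -rf /var/lib/apt/lists/*
# Bake the compiler choice into the environment for the build system.
ENV CC=clang-${LLVM_VERSION} CXX=clang++-${LLVM_VERSION}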

I just assigned colors to pixels, I'm not an ops type, but Docker seemed a reasonably painless way to do all this crap that unfortunately had to be done. It was certainly a big improvement on what we had before, where I had a local CentOS image that was subtly different from whatever the hell we deployed, and any dependency updates would be communicated through an extensive game of jungle telegraph. I'm not sure what a "standards and contracts" alternative solution to all this would be: we and our cloud service provider agree on a precise environment and set of libraries, including exact build flags for something complex like LLVM, that we develop for, they promise to support, and we then run on their barest metal? I don't think that would be appealing to most people, on both sides of the contract.

If your point is just that we in fact shouldn't have to do this crap, ABIs should just be stable, libraries secure, and dependencies get along: sure, that would be nice, but unfortunately the world is a big human mess and they don't necessarily do any of those things.

Xerophyte fucked around with this message at 03:49 on Feb 17, 2020

Volguus
Mar 3, 2009

Space Gopher posted:

This doesn't really make sense.

Docker (well, OCI) images are an output format for deployable binary packages. Assuming that what you write can run on a Linux system, you can run the image build at the end of each special pipeline, and get a Docker-compatible image out of it. The steps for turning your "snowflake" output artifacts into a Docker image are in the Dockerfile specific to that artifact, so they exist alongside specific code and build definitions.

This assumes that the goal is to create a docker image. As far as I can see in this thread, this is the goal for a lot of you folks. Oftentimes, the default goal. What I'm arguing against is this being the goal, or at least the default target of our output. Docker is not the purpose. Building software is the purpose. Docker is a tool that sometimes can help reach that purpose. It cannot be a goal in and of itself; it's merely a tool. A suitable tool in certain scenarios.

Space Gopher posted:

Have you ever built and run a Docker image?

Yes I have. Both. But for, apparently, different purposes than you. One of my goals for one of my projects was to build a Debian 9.4 deb package that could be installed on those computers that run Debian 9.4. I'm running Fedora on my workstation, they're running Debian, therefore a custom build had to be made. Docker fit that purpose because I didn't want to create a Debian 9.4 build VM. I was lazy. Docker was an excuse to not do it. The correct and proper way to go about it would have been to just add that particular environment to the build system.
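
For the record, the lazy version looks roughly like the sketch below (hypothetical image tag and project layout; the real packaging steps would be project-specific):

code:
# Sketch: a Debian 9 image used purely as a disposable build
# environment on a Fedora workstation.
FROM debian:9.4
RUN apt-get update && apt-get install -y build-essential devscripts
WORKDIR /src
# Typical use: mount the source tree and build the .deb inside, e.g.
#   docker run --rm -v "$PWD":/src deb94-build dpkg-buildpackage -us -uc
CMD ["dpkg-buildpackage", "-us", "-uc"]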

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Volguus posted:

This assumes that the goal is to create a docker image. As far as I can see in this thread, this is the goal for a lot of you folks. Oftentimes, the default goal. What I'm arguing against is this being the goal, or at least the default target of our output. Docker is not the purpose. Building software is the purpose. Docker is a tool that sometimes can help reach that purpose. It cannot be a goal in and of itself; it's merely a tool. A suitable tool in certain scenarios.

I seriously doubt anyone in this thread has a goal of creating a docker image instead of creating usable software.

Xik
Mar 10, 2011

Dinosaur Gum
I make software but it being usable isn't always a requirement.

Volguus
Mar 3, 2009

Thermopyle posted:

I seriously doubt anyone in this thread has a goal of creating a docker image instead of creating usable software.

https://forums.somethingawful.com/showthread.php?noseen=1&threadid=3409898&pagenumber=281&perpage=40#post499380078

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

:iceburn:


Seriously, though, the goal in that post is to mess around with the software package in isolation, not to make a docker image.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell


But...that doesn't prove your claim at all?

Foxfire_
Nov 8, 2010

ultrafilter posted:

Math keeps changing

JavaScript math is even more of a horror than the rest of the language.

Getting an exact particular float from a complicated math operation implemented in a library is not a reasonable expectation. Floating point values are not real numbers, and if you're implementing numerical code, you need to deal with that. (Especially since the value he's looking for is not the closest double to the correct one: he expects gamma(11.54)=13098426.039156161 and the actual closest one is ~13098426.039075902)
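
If you want to check that claim yourself, here's a sketch assuming the mpmath package (the two constants are the values quoted above):

code:
# Compare both candidate doubles against a 50-digit reference value.
from mpmath import mp, gamma

mp.dps = 50                       # 50 decimal digits of working precision
reference = gamma(mp.mpf(11.54))  # gamma of the double nearest 11.54

expected = 13098426.039156161     # the exact double the failing test demands
observed = 13098426.039075902     # the nearest double, per the claim above

# Prints True if the test's expected value really is the farther one.
print(abs(reference - expected) > abs(reference - observed))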

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Absurd Alhazred posted:

It's loving 2020. If you're strongly dependent on a 3rd party library that depends on C++98 ABI (that's what breaks backwards compatibility between OpenCV 4 and 3, by the way), then the problem is you.

This is some weak poo poo, to focus on the specific example given rather than the general class of problems it was obviously meant to represent.

redleader
Aug 18, 2005

Engage according to operational parameters
more importantly than merely delivering working software, docker provides another way to wile away the time before we die

Tei
Feb 19, 2011

Foxfire_ posted:

Getting an exact particular float from a complicated math operation implemented in a library is not a reasonable expectation. Floating point values are not real numbers, and if you're implementing numerical code, you need to deal with that. (Especially since the value he's looking for is not the closest double to the correct one: he expects gamma(11.54)=13098426.039156161 and the actual closest one is ~13098426.039075902)

Completely agree... it's the Node test that is wrong.

I would even expect to get different numbers on different hardware (32 vs 64 bits).

I might have different expectations of some virtual-machine-based programming language that made a point about stability in this area (I don't think that's the case for JavaScript), but I honestly don't know.

1337JiveTurkey
Feb 17, 2005

Making a docker image is about as anodyne as making a zip file, and not some symptom of things being profoundly wrong with the world any more than everything else going on right now.

CPColin
Sep 9, 2003

Big ol' smile.
There is no ethical containerization under capitalism.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...
You're all debating this

Hammerite posted:

This is some weak poo poo, to focus on the specific example given rather than the general class of problems it was obviously meant to represent.

When we've already solved this argument with this

redleader posted:

we're all missing the greater point here in that in a perfect world, computers wouldn't exist

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


CPColin posted:

There is no ethical containerization under capitalism.

:hai:

Arsenic Lupin
Apr 12, 2012

This particularly rapid💨 unintelligible 😖patter💁 isn't generally heard🧏‍♂️, and if it is🤔, it doesn't matter💁.


Absurd Alhazred posted:

There are externalities to not pushing library developers to actually keep up with the latest versions. Eventually you're going to run into combinatorial explosion.

Eventually is yesterday.

FlapYoJacks
Feb 12, 2009

Volguus posted:

Yes I have. Both. But for, apparently, different purposes than you. One of my goals for one of my projects was to build a Debian 9.4 deb package that could be installed on those computers that run Debian 9.4. I'm running Fedora on my workstation, they're running Debian, therefore a custom build had to be made. Docker fit that purpose because I didn't want to create a Debian 9.4 build VM. I was lazy. Docker was an excuse to not do it. The correct and proper way to go about it would have been to just add that particular environment to the build system.

Yeah, but instead of going through all that trouble, you could have handed them a docker image that was built from a Fedora base image. Then they wouldn't care whether it was Debian or not.

Athas
Aug 6, 2007

fuck that joker
So is Docker mostly used to isolate/reproduce the build environment, and not really the runtime environment?

If so, Nix sounds like a good choice as well.

OddObserver
Apr 3, 2009

Foxfire_ posted:

Getting an exact particular float from a complicated math operation implemented in a library is not a reasonable expectation. Floating point values are not real numbers, and if you're implementing numerical code, you need to deal with that. (Especially since the value he's looking for is not the closest double to the correct one: he expects gamma(11.54)=13098426.039156161 and the actual closest one is ~13098426.039075902)

The talk of it being relevant to science reproducibility without any mention of numerical stability is also not inspiring --- unless somehow this sort of thing makes an algorithm not work (but that's what stable algorithms are there for?), the difference is almost certainly way less than the measurement uncertainty of the input data... unless there is something really special going on which needs lots and lots of digits of precision for Science(tm) reasons, in which case you probably shouldn't be doing stuff with doubles anyway?

JawnV6
Jul 4, 2004

So hot ...
hey this was before the weekend but 90+ posts of arguing about the Liskov Containerization Principle didn't cover it so

TraderStav posted:

As someone still learning development, I'm finding the middle part of your statement a big grey area where I can't tell the difference between actual technologies and satire. I'm actually hoping none of it is satire so I can sound like a Brit when I talk as I approach that level of competency.

knowing the terms isn't actually required. half the time you just have to vomit the same sequence at some other engineer, re-creating the solution you heard in their head with the context you lack. knowing things is optional, just work on repeating arbitrary strings of jargon

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Athas posted:

So is Docker mostly used to isolate/reproduce the build environment, and not really the runtime environment?

If so, Nix sounds like a good choice as well.

No, it's the runtime environment. Being able to reproduce, share, and version a stable, known-good build environment is just a bonus.

Volguus
Mar 3, 2009

ratbert90 posted:

Yeah, but instead of going through all that trouble, you could have handed them a docker image that was built from a Fedora base image. Then they wouldn't care whether it was Debian or not.

I was going through all that trouble anyway. If I am to piss around with docker, it would just make sense to spare others the same fate. And I would not have been able to use a Fedora base image; I would have had to use something that provides both CUDA and TensorFlow (maybe it exists already, maybe not), so for me it wouldn't have been any gain whatsoever. And for them even less, since they already have CUDA, TensorFlow, and the NVIDIA drivers installed because of the work they do.

Essentially providing a docker image (can you run an X11 UI application from docker even? Maybe export the display and forward ports...) and telling them to run that would have been a disservice and a headache for absolutely everyone, myself included. Work for work's sake, finding a reason to run docker. This is pretty much the definition of "docker is the goal".
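
(For what it's worth, it can be done on a Linux host by sharing the X11 socket --- a sketch with hypothetical image and application names, and the amount of plumbing involved rather proves the point:)

code:
# Allow local (non-network) clients, such as containers, to connect
# to the X server. Security implications vary by distro and setup.
xhost +local:
# Hand the container the display and the X11 socket.
docker run --rm \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    some-image some-x11-app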

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Volguus posted:

One of my goals for one of my projects was to build a Debian 9.4 deb package that could be installed on those computers that run Debian 9.4. I'm running Fedora on my workstation, they're running Debian, therefore a custom build had to be made. Docker fit that purpose because I didn't want to create a Debian 9.4 build VM. I was lazy. Docker was an excuse to not do it. The correct and proper way to go about it would have been to just add that particular environment to the build system.

The correct and proper way is to do more work for the same result?

OddObserver posted:

The talk of it being relevant to science reproducibility without any mention of numerical stability is also not inspiring --- unless somehow this sort of thing makes an algorithm not work (but that's what stable algorithms are there for?), the difference is almost certainly way less than the measurement uncertainty of the input data... unless there is something really special going on which needs lots and lots of digits of precision for Science(tm) reasons, in which case you probably shouldn't be doing stuff with doubles anyway?

Right, standard JavaScript is not suitable for some applications, but people really want to do those things in it because it's the hammer they know.

SupSuper
Apr 8, 2009

At the Heart of the city is an Alien horror, so vile and so powerful that not even death can claim it.
My takeaway from Docker is package managers were a mistake and everyone would rather deploy their applications like Windows DLLs.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

Athas posted:

So is Docker mostly used to isolate/reproduce the build environment, and not really the runtime environment?

If so, Nix sounds like a good choice as well.

It's for both. I have multi-stage builds where the build is containerized using some given base container that contains the build toolchain, and then the result of the build (static binary or Python wheel or whatever) is transferred to a stripped-down container.
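
A minimal sketch of that pattern, with a hypothetical Go service standing in for the "static binary or Python wheel or whatever":

code:
# Stage 1: build inside an image that carries the full toolchain.
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN mkdir -p /out && CGO_ENABLED=0 go build -o /out/app .

# Stage 2: ship only the build result in a stripped-down image.
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]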

QuarkJets
Sep 8, 2008

Volguus posted:

Ok, please help me here: what kind of technologies does Docker allow you to work on that you would otherwise have been unable to? I've thought about it for about 20 seconds now and I cannot come up with an answer. I must have a very poor imagination.

I don't think that there's any technology that absolutely needs docker, much like how there's not any nail that absolutely needs a hammer when a rock works well enough

QuarkJets
Sep 8, 2008

I don't see why anyone would need git, my convention of inserting version numbers into file names works well enough and anyone who thinks otherwise should be brought out back and shot. Now please excuse me, the nurses at my care home only permit one hour of shitposting per day

OddObserver
Apr 3, 2009

QuarkJets posted:

I don't see why anyone would need git, my convention of inserting version numbers into file names works well enough and anyone who thinks otherwise should be brought out back and shot. Now please excuse me, the nurses at my care home only permit one hour of shitposting per day

Never thought we would have VMS posting here!

Cervix-A-Lot
Sep 29, 2006
Cheeeeesy
Good god some of you people are insufferable cunts.
