|
That may be the idea but it’s absolutely not how it works now and nobody has any plans to fix it so
|
# ? Jun 23, 2019 17:19 |
|
Phobeste posted:That may be the idea but it’s absolutely not how it works now and nobody has any plans to fix it so That's computing in a nutshell, but at least it gives us a lot to talk about in this thread.
|
# ? Jun 23, 2019 17:20 |
|
Soricidus posted:Unaudited third party dependencies exist anywhere, yes, but only in the javascript world is it standard practice to pull in literally hundreds of distinct libraries before you write a single line of your own code. The attack surface is far, far larger. I'm confused by what your point is because that's exactly what I meant by the text you quoted.
|
# ? Jun 23, 2019 18:50 |
|
It should be a surmountable task to implement some static analysis in the test phase of a new build, which could alert you about significant changes, new web requests or similar.
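A minimal sketch of what such a check could look like, assuming you diff a dependency's source between versions in CI. The list of "risky" modules and the regex are illustrative assumptions, not an existing npm feature:

```javascript
// Hedged sketch: a naive "capability diff" you could run in a build's
// test phase. It flags when an updated dependency starts referencing
// built-in modules it didn't use before (network, shell, filesystem).
const RISKY = ['http', 'https', 'net', 'dns', 'child_process', 'fs'];

function riskyRequires(source) {
  const found = new Set();
  const re = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    if (RISKY.includes(m[1])) found.add(m[1]);
  }
  return found;
}

// Alert only on capabilities the previous version didn't have.
function newCapabilities(oldSource, newSource) {
  const before = riskyRequires(oldSource);
  return [...riskyRequires(newSource)].filter(cap => !before.has(cap));
}
```

A real implementation would parse the AST rather than grep, and would also catch dynamic `require` and `import()`, but even this crude diff would have surfaced a padding library suddenly growing an HTTPS call.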
|
# ? Jun 23, 2019 18:59 |
|
xtal posted:I think the idea, whether it exists in practice or not, is that those little libraries have more code review and cover more edge cases than the solution you'd write yourself, because they are open source and relied on by all these other projects. In a utopian future, we would all have a single left-pad or is-odd implementation that we decided was optimal as a community. That's a reasonable motivation to structure the code that way, but most of it should be in a big "extended standard library" project with clear central management. And you shouldn't need a separate download and dependency listing for each function.
|
# ? Jun 23, 2019 19:23 |
|
If it's going to be like that (tons of external libraries, one function per library, etc.) it sure would be nice to have some maintainers that vet packages. Something like Debian and its packages.
|
# ? Jun 23, 2019 19:29 |
|
xtal posted:In a utopian future, we would all have a single left-pad or is-odd implementation that we decided was optimal as a community. What? In a utopia I have to learn what implementation was decided on by a community for every stupid thing? If I want to add padding to a string, it might be my requirements for padding are very specific and easily described in code that I write myself. What type of utopia is this imaginary hellscape? I feel like the suggestion itself is so bad that I'm already in hell.
|
# ? Jun 23, 2019 19:47 |
|
Here's a talk about that npm fiasco: https://www.youtube.com/watch?v=2cyib2MgvdM
|
# ? Jun 23, 2019 19:49 |
|
Nolgthorn posted:What? In a utopia I have to learn what implementation was decided on by a community for every stupid thing? If I want to add padding to a string, it might be my requirements for padding are very specific and easily described in code that I write myself. This is partly correct, and I encourage you to move it toward completely correct. If you have special requirements for string padding that aren't included in the One True Left Pad, send a patch. That would be a better idea than writing it yourself, forgetting the edge cases contributed by others in your same position, and needing to write tests and maintain it and so on. I'll also call back to my earlier example: you needed to learn the implementation of the kernel, the drivers, the compiler. You didn't write your own microkernel because Linux is too bloated, has too many randoms contributing, or whatever. In this argument, libraries get an unfairly negative rap, mostly because of the overhead of package management, which is an implementation detail. But libraries are just "other people's code", and you run a lot of other people's code that goes by other names. The line drawn between what constitutes a library versus an OS or whatever is fairly arbitrary. xtal fucked around with this message at 20:17 on Jun 23, 2019 |
# ? Jun 23, 2019 20:13 |
|
What? In a utopia I have to learn what implementation was decided on by a community for every stupid thing? If I want to push an element to an array, it might be my requirements for pushing are very specific and easily described in code that I write myself. What type of utopia is this imaginary hellscape? I feel like the suggestion itself is so bad that I'm already in hell.
|
# ? Jun 23, 2019 20:24 |
|
CS is so powerful because everything is built on abstractions. Abstracting common utility functions instead of writing them yourself is just one example. What I take issue with is that there are so many abstractions of the same thing that are just minutely different. I would really prefer the universe where common abstractions are part of core JS or officially sponsored libraries.
|
# ? Jun 23, 2019 20:27 |
|
Nolgthorn posted:What? In a utopia I have to learn what implementation was decided on by a community for every stupid thing? If I want to add padding to a string, it might be my requirements for padding are very specific and easily described in code that I write myself. I feel like you're deliberately taking the least charitable view of their point. I mean, do you seriously think they were suggesting you would not be able to write your own left-pad?
|
# ? Jun 23, 2019 20:45 |
|
Insisting that some dipshit library isn't that different from an OS seems like a horror.xtal posted:That's computing in a nutshell, but at least it gives us a lot to talk about in this thread.
|
# ? Jun 23, 2019 21:44 |
|
qsvui posted:Insisting that some dipshit library isn't that different from an OS seems like a horror. require('cyan-ansi')
|
# ? Jun 23, 2019 22:29 |
|
xtal posted:
But nobody is advocating for this. Nobody is reducing the complexity of all of the poo poo on node. The key difference is that when some random one-liner is contributed to the kernel, people look at it, because the kernel is an actual monolithic entity with oversight. It doesn't have a thing where some random downstream library adds stuff and it filters out, and people read all the code. The problem with the way npm works is that the combination of the small size and complexity of a given library and the sheer number of said libraries leads to a self-reinforcing cycle of "oh i skimmed its readme and maybe a couple files and I'm an experienced dev, I know how I'd do this even though I don't have the time to, and I assume it's good, so I'll slurp it on up". No consistent code review, and you follow this path: easy to make small library -> easy to use small library -> people keep libraries small -> there's a shitload of them -> i should first look for tiny packages because why reinvent even the tiniest wheel -> Where the gently caress did all this code come from?
|
# ? Jun 23, 2019 22:43 |
|
xtal posted:This is partly correct, and I encourage you to move it toward completely correct. If you have special requirements for string padding that aren't included in the One True Left Pad, send a patch. That would be a better idea than writing it yourself, forgetting the edge cases contributed by others in your same position, needing to write tests and maintain it and so on. Part of the difficulty here is that the simpler and purer you get, the more difficult your special cases can be. If you have the need to use is-odd, you probably have a use case where you want to consider what a non-integer or Infinity input means in your case. A micro-definition where strings can be even helps nobody.
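That edge-case point is easy to demonstrate. Below is a hypothetical sketch (not the actual is-odd package's code) contrasting a "generic" implementation, which has to invent answers for non-integers, Infinity and numeric strings, with the strict version you'd write against your own contract:

```javascript
// Illustrative only -- NOT the real `is-odd` implementation.
// A "generic" version must decide what 2.5, Infinity and '3' mean,
// even though your application may want to reject them outright.
function isOddGeneric(n) {
  const x = Number(n);                    // coerces '3' to 3, 'foo' to NaN
  if (!Number.isInteger(x)) return false; // 2.5, Infinity, NaN: silently false
  return Math.abs(x % 2) === 1;
}

// The version you'd write for yourself, with *your* contract:
function isOddStrict(n) {
  if (!Number.isInteger(n)) throw new TypeError('expected an integer');
  return Math.abs(n % 2) === 1;
}
```

Neither behavior is wrong in the abstract; the problem is that a micro-package picks one for everybody, and its choice (coerce and swallow) is rarely the one a careful caller wants.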
|
# ? Jun 23, 2019 23:36 |
|
xtal posted:This is partly correct, and I encourage you to move it toward completely correct. If you have special requirements for string padding that aren't included in the One True Left Pad, send a patch. That would be a better idea than writing it yourself, forgetting the edge cases contributed by others in your same position, needing to write tests and maintain it and so on. There is absolutely no reason to cover every single edge case. I know what my code is doing and what to expect out of it, and if the requirements change I can easily change my code. Very few packages justify themselves by fulfilling a task that is relatively complex yet still narrowly specific. All the rest of it is people looking for projects to contribute to instead of building something.
|
# ? Jun 24, 2019 03:25 |
|
There should just be one large function implemented on npm that handles all possible edge cases of all possible behaviors a function should have, and as long as you import and call that function the right way your program is done no matter who you are or what you're making
|
# ? Jun 24, 2019 03:44 |
|
*five minutes later* we regret to inform you that the function is a switch statement between 2000 different 5-line functions written by russian cybercriminals
|
# ? Jun 24, 2019 03:51 |
|
brb reimplementing eval in a way that's fundamentally incompatible and more annoying to use

pull requests to make everything else use it coming soon
|
# ? Jun 24, 2019 03:58 |
|
Jabor posted:brb reimplementing eval in a way that's fundamentally incompatible and more annoying to use way to reinvent JSON.parse()
|
# ? Jun 24, 2019 04:49 |
|
xtal posted:You didn't write your own microkernel because Linux is too bloated, too many randoms contributing, or whatever. I mean, I might've? Even with the rt patches, rt Linux is a joke.
|
# ? Jun 24, 2019 06:40 |
|
Suspicious Dish posted:Part of the difficulty here is that the more simple and pure you get, the more difficult your special cases can be. Apart from the fact that most of these packages are so shoddily written they *don't* check their inputs, micropackages are probably going to be worse for performance in the long run for all the extraneous work they have to do. It's like having a language that inserts bounds checks on all array accesses! Surely nobody would use that either.
|
# ? Jun 24, 2019 09:47 |
|
1. make some random utility library in your spare time
2. get busy, solicit for maintainers, give publish permissions to some guy you talked to once on gitter
3. all your dependents get owned
|
# ? Jun 24, 2019 17:39 |
|
brap posted:1. make some random utility library in your spare time surely that couldn't happen
|
# ? Jun 24, 2019 21:03 |
|
dick traceroute posted:surely that couldn't happen Possibly, but: https://github.com/dominictarr/event-stream/issues/116
|
# ? Jun 25, 2019 03:34 |
|
What I'm getting here is that having dependencies isn't the problem, updating dependencies is.
|
# ? Jun 25, 2019 05:36 |
|
That issue has 666 comments hail satan
|
# ? Jun 25, 2019 05:40 |
|
Vanadium posted:What I'm getting here is that having dependencies isn't the problem, updating dependencies is. I feel like not having proper versions on deps in JS is the real problem. My impression is that builds always just fetch the latest commit in a GitHub repo. I don’t do any JS dev so correct me if I’m wrong.
|
# ? Jun 25, 2019 06:03 |
Janitor Prime posted:I feel like not having proper versions on deps in JS is the real problem. My impression is that builds always just fetch the latest commit in a GitHub repo. I don’t do any JS dev so correct me if I’m wrong. there is versioning but the way npm handles it on the package consumer side is extremely bad

basically you set your tolerance for "how big" an update can be before you have to manually change the version string in your dependency list. you can pin a specific version - "3.2.6" - or put special characters before the version number to allow for autoupdates. "~3.2.6" will grab 3.2.7 if it exists but not 3.3.0. "^3.2.6" denotes that you're fine with automatically grabbing 3.3.0 but not 4.0.0. if you just write "3" that means you really want to gently caress your poo poo up so npm grabs the latest 3.x.x release, whatever that happens to be

this seems halfway acceptable until you remember that version numbers are entirely arbitrary and also that npm defaults to inserting dependencies as "^3.2.6"

Jazerus fucked around with this message at 06:53 on Jun 25, 2019 |
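Those range rules can be sketched in a few lines. This is a deliberately simplified re-implementation for illustration; real projects should use npm's `semver` package, which also handles prereleases, x-ranges, and build metadata:

```javascript
// Simplified sketch of npm's version-range matching, ignoring
// prerelease tags and compound ranges. Use the real `semver`
// package in practice.
function parse(v) {
  return v.split('.').map(Number); // '3.2.6' -> [3, 2, 6]
}

function gte(a, b) { // a >= b, comparing [major, minor, patch]
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] > b[i];
  }
  return true;
}

function satisfies(version, range) {
  const v = parse(version);
  if (range.startsWith('^')) {        // caret: same major, anything newer
    const r = parse(range.slice(1));
    return v[0] === r[0] && gte(v, r);
  }
  if (range.startsWith('~')) {        // tilde: same major.minor, newer patch
    const r = parse(range.slice(1));
    return v[0] === r[0] && v[1] === r[1] && gte(v, r);
  }
  if (!range.includes('.')) {         // bare '3': any 3.x.x
    return v[0] === Number(range);
  }
  return version === range;           // exact pin
}
```

The default of `^` means a fresh `npm install` silently opts you into every future minor and patch release, which is exactly the channel the event-stream attack used.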
|
# ? Jun 25, 2019 06:37 |
|
Semver was an interesting idea worth trying but I don't think it can really be declared a success.
|
# ? Jun 25, 2019 07:56 |
|
SemVer is a good effort, but it really needs tool support. Why doesn't Some Program decide the version number based on my changes? It also needs humans to stop attaching emotional and branding value to version numbers.
|
# ? Jun 25, 2019 08:35 |
|
I'd like point/minor/major version numbers to mean "you should be able to update to this with no changes unless something goes wrong", "you might have to spend half an hour changing some stuff to get this working", "updating to this will be a gigantic PITA"
|
# ? Jun 25, 2019 08:53 |
|
That's what it's supposed to mean. In practice, people put major breaking changes in point releases, and bump the major version number for purely marketing reasons, even though they claim to be doing semantic versioning. People are stupid.
|
# ? Jun 25, 2019 09:00 |
|
We might as well just get rid of "0.x" and "1.0" because nothing can truly be considered "finished" anymore and half the packages the web depends on are beta.
|
# ? Jun 25, 2019 09:22 |
|
Athas posted:It also needs humans to stop attaching emotional and branding value to version numbers. Version numbers are emotional, branded values. Or rather, they are pseudo-numeric, cultural, linguistic expressions; they don't adhere strictly to any mathematical law. Books used to be published with "first edition, second edition" etc. At least that follows the rules of counting, but you don't violate anything but the sensibilities of some users by going from version 3.0.6.001beta to version 5 and from there to version XP Pro. It's just a classic case of confusing something qualitative with something quantitative. And importantly, there's nothing stopping you from bumping the version number by an innocuous amount after injecting malware. If web developers can do automatic updates and builds, it should also be possible to do automatic security analysis. The version number is just some metadata to add to the report, not something you can reason or calculate with.
|
# ? Jun 25, 2019 09:33 |
|
Every tool I've worked with separates the numerical version (four integers separated by dots) from the human-readable version ("2.0 Turbo Pro Edition - BETA"). However, as long as the human-readable version has some numbers in it that look like versions, it would be masochistic to have those numbers be entirely unrelated to the numerical version. And vice versa, I can't think of any software that has managed to completely get rid of version numbers on the human side (even macOS is called "10.14 Mojave" or whatnot), because how does a customer then figure out which version is the newest? Shame Roman numerals are so '90s. Our product has faced the bloodshed, so the pseudo-semver convention in use is: { UI refresh }.{ Database migration }.{ Feature }.{ Patch }
|
# ? Jun 25, 2019 11:02 |
|
NihilCredo posted:However as long as the human-readable version has some numbers in it that look like versions, it would be masochistic to have those numbers be entirely unrelated to the numerical version. What I'm getting at is that the bolded bit is a fallacy. There is no such thing as a numerical version, really. The version number is a word, not a number, and it's synonymous with the (only slightly more) human-readable named version.

Because software developers are numerically oriented people, and software versions appear as discrete, countable units, they've adopted a naming convention stylized as numbers (or pseudo-numbers; I don't know of any number system which uses multiple decimal points). Which in itself is perfectly fine of course. You could also name them alphabetically after Scooby-Doo characters or whatever. Also perfectly fine, although it would violate the pseudo-numeric convention which is by now well established.

However, what you cannot do is give any truth value to mathematical reasoning based on version numbers, which you might be inclined to do given their numerical appearance. There is no logical guarantee that version 1.2.3.4 is different from 1.2.3.3, or that version 2 is newer than version 1, or that the changes in 2.2.0.1 are small compared to version 2.2 and big compared to 1.0. It's probably true, given the convention, but not true like 1+1=2.

So by setting some npm update rule that says "auto update the small changes, hold on the big ones" based on comparing version numbers, you are not really relying on mathematical reasoning; you're relying on the person publishing the dependency sticking to the naming convention and, fingers crossed, not being out to do something bad to you and your web shop customers.
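The pseudo-number point bites as soon as you compare versions naively: string comparison and float parsing both give wrong answers, and a sensible ordering only exists because you choose a convention and implement it. A quick illustration:

```javascript
// Version "numbers" aren't numbers: naive comparisons get them wrong.
// String comparison goes character by character, so '1.10' sorts
// *before* '1.9'; parseFloat gives up at the second dot.
const naive = '1.10' > '1.9';        // false
const asFloat = parseFloat('1.2.3'); // 1.2

// A sensible order only exists because we *choose* the convention of
// comparing each dotted field as an integer, padding missing fields
// with zero ('2.0' === '2.0.0').
function cmpVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] || 0) - (pb[i] || 0);
    if (d !== 0) return Math.sign(d);
  }
  return 0;
}
```

And even with `cmpVersions` in hand, "newer" only means "the publisher incremented the fields honestly"; the comparison tells you nothing about what actually changed.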
|
# ? Jun 25, 2019 11:36 |
|
well yeah everyone admits that these things are conventions that require trust. afaict nobody is anywhere close to advocating for a mathematical proof of version numbers. social solutions are simpler and often better than complex technical ones.
|
# ? Jun 25, 2019 11:50 |
|
Suspicious Dish posted:well yeah everyone admits that these things are conventions that require trust. afaict nobody is anywhere close to advocating for a mathematical proof of version numbers. social solutions are simpler and often better than complex technical ones. Agree on all points, it's just that the convention is so baked in to a numerically oriented culture, many people rely on those numerical assumptions. For the most part, no problem. But once you get into security stuff, problem.
|
# ? Jun 25, 2019 11:53 |