|
"Peer-to-peer" was a dot-com-bubble thing after the explosion of Napster, where a bunch of companies chased exactly this stupid idea (no need to buy servers!!!), but basically everyone involved quickly found that client connections are worse than rubbish, upload speeds are bad, and normal consumer computers are really slow, so you need servers anyway. Skype tried to solve this by closely monitoring every peer in the network and upgrading really consistent client connections to "super-peers". But this resulted in some hilariously big outages when super-peers went down at the same time. Microsoft seems to have wiped the old Skype blogs off the planet, but there were several outages that amounted to "oops, turns out our super-peers were just some phones and we think the battery ran out on them, your calls might be a bit rocky now". I just think it's hilarious that Yahoo! (yodel noise) got snookered into peer-to-peer hype, 10 years after it died.
|
# ? Aug 27, 2019 17:21 |
|
|
|
Wasn't the peer-to-peer boom also a response to the fact that content abruptly grew in size while bandwidth costs were slow to drop, so there was a period where it really was cost-prohibitive for most companies? As in... there was some logic to it? Unlike the current tidal wave of "my startup is making a realtime chat app built on blockchain tech for conversation integrity!"
|
# ? Aug 27, 2019 20:02 |
|
Suspicious Dish posted:Skype tried to solve this by closely monitoring every peer in the network and upgrading really consistent client connections to "super-peers". But this resulted in some hilariously big outages when super-peers went down at the same time. Microsoft seems to have wiped the old Skype blogs off the planet, but there were several outages that amounted to "oops, turns out our super-peers were just some phones and we think the battery ran out on them, your calls might be a bit rocky now"

And every time this comes up on HN, they claim it was because this is how Microsoft installs the back doors. No, it's much more simply that the old architecture didn't scale at all, sorry!
|
# ? Aug 27, 2019 20:36 |
|
Suspicious Dish posted:Skype tried to solve this by closely monitoring every peer in the network and upgrading really consistent client connections to "super-peers". But this resulted in some hilariously big outages when super-peers went down at the same time. Microsoft seems to have wiped the old Skype blogs off the planet, but there were several outages that amounted to "oops, turns out our super-peers were just some phones and we think the battery ran out on them, your calls might be a bit rocky now"
|
# ? Aug 27, 2019 23:06 |
|
Skype was used for gaming chat between the eras of TeamSpeak and Discord, and you could query anybody's IP address by their public username, then DDoS them during important matches. Good times.

Cuntpunch posted:Wasn't the peer-to-peer boom also a response to the fact that content abruptly grew in size while bandwidth costs were slow to drop, so there was a period where it really was cost-prohibitive for most companies?

Well, it still is cost-prohibitive for most companies, which is why everything hosted online goes through a megacorp. P2P went away partly because of security issues like the one I mentioned above, but mostly because it's more complicated and harder to build. You make more profit with centralized infrastructure, fixing the scale issues by throwing (ideally VC) money at them instead. xtal fucked around with this message at 03:49 on Aug 28, 2019 |
# ? Aug 28, 2019 03:10 |
|
Absurd Alhazred posted:I mean, peer-to-peer content distribution isn't that bad of an idea. It's like realtime bittorrent!

The various illegal IPTV services out of China still use a real-time bittorrent approach.
|
# ? Aug 28, 2019 12:59 |
|
This probably doesn't qualify as a coding horror, but I'm going to post it anyway. I work at a big tech company. We have a division that installs our software at customer sites, and another division that supports the software once it is live. It shouldn't be a surprise that Excel sees pretty wide use for various purposes (project management, ad-hoc trackers, random data analysis, etc.). I would expect that through working here you'd pretty quickly pick up a basic understanding of Excel. Over the past week, I've met two people who have worked here for 7 years and who I think are top contenders for the "most proficient at Excel" award:
#2 in particular is hilarious to me because it seems like a hamfisted parody of the "roll your own CSV parser" that seems to come up in tech circles.
|
# ? Aug 28, 2019 14:47 |
|
My previous company - the myth: "we're doing Machine Learning to track which routes are profitable, so that our network of connections adapts itself to demand". The reality: meet Bob. Bob will download a database as a CSV, open it on a custom-purchased machine that has loving water cooling and all the fancy specs needed to handle the amount of data being imported into Excel, make alterations to what the routes should be on given days and how they should be priced, then upload it to the server. That's my most recent Excel-related horror story. canis minor fucked around with this message at 16:04 on Aug 28, 2019 |
# ? Aug 28, 2019 15:55 |
|
Excel is too powerful and access to it should be restricted
|
# ? Aug 28, 2019 16:48 |
|
Excel is a tough one for me because I end up using it like once every month or two, which is just enough time to forget all the minutiae, like how to make a cell reference row-bound vs. column-bound and so on. This makes it always a slightly frustrating experience to use, which makes me avoid it, so I never get those details ingrained.
|
# ? Aug 28, 2019 17:07 |
|
Just push F4 and then gently caress with the $ signs if you only want the row or column locked
|
# ? Aug 28, 2019 17:09 |
|
Excel is crying out for a strict mode.
|
# ? Aug 28, 2019 19:07 |
|
Munkeymon posted:Excel is too powerful and access to it should be restricted
|
# ? Aug 28, 2019 20:00 |
|
From HN: https://news.ycombinator.com/item?id=20818398 posted:Apple's libc used to shell out to perl in a function: https://github.com/Apple-FOSS-Mirror/Libc/blob/2ca2ae74647714acfc18674c3114b1a5d3325d7d/gen/wordexp.c#L192
|
# ? Aug 28, 2019 20:51 |
https://twitter.com/Parent5446/status/1166179218188881920
|
|
# ? Aug 28, 2019 22:23 |
|
canis minor posted:My previous company - the myth: "we're doing Machine Learning to track which routes are profitable, so that our network of connections adapts itself to demand". The reality: meet Bob. Bob will download a database as a CSV, open it on a custom-purchased machine that has loving water cooling and all the fancy specs needed to handle the amount of data being imported into Excel, make alterations to what the routes should be on given days and how they should be priced, then upload it to the server.
|
# ? Aug 29, 2019 00:35 |
|
amazing how goatkcd is so apt so often
|
# ? Aug 29, 2019 09:56 |
|
I'd love it if someone could confirm that the below way of doing this is crazy and a true coding horror. We have an application that fetches images from a hard drive that go with a form. Each form has a unique ID and each image is named according to the ID (so ID.jpg). All the files exist in the same folder, either in the root or in a subfolder which can be easily parsed from the ID. When fetching these files, the application doesn't use a path to the file, but instead calls the terminal's find command. The folder has (at the moment) about 700 000 images in it and this is likely to increase to about a million sooner rather than later. Is there any sane reason to use find? Why not just open the file with a path to the root and then the clustered folder if that fails, or the other way around, since most of the files are clustered? This seems to be a huge loving waste of resources.
|
# ? Aug 29, 2019 10:07 |
|
Kuule hain nussivan posted:I'd love it if someone could confirm that the below way of doing this is crazy and a true coding horror. have you asked your coworkers why it does it that way?
|
# ? Aug 29, 2019 10:14 |
|
Hammerite posted:have you asked your coworkers why it does it that way?
|
# ? Aug 29, 2019 10:24 |
|
Kuule hain nussivan posted:Closed source, outside supplier. When I asked, I got "It is how the system works" as an answer from a senior developer. oh ok. They are probably idiot hell fuckers, op.
|
# ? Aug 29, 2019 10:28 |
|
Hammerite posted:oh ok. They are probably idiot hell fuckers, op.
|
# ? Aug 29, 2019 10:53 |
|
If you just learnt it, flaunt it all over the loving project base.
|
# ? Aug 29, 2019 10:57 |
|
Kuule hain nussivan posted:I'd love it if someone could confirm that the below way of doing this is crazy and a true coding horror.

This sometimes happens and can be caused by unforeseen changes in an application. The solution is easy, I think: either organize these photos by year (2018/14324234.jpg, 2019/23232442.jpg) or by the first two numbers (14/4324234.jpg, 23/232442.jpg). Most filesystems will start to break with that many files, with operations taking longer and longer. Trying to open the folder in a graphical file manager would probably put it into a long read operation.
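The prefix-bucketing idea above can be sketched in a couple of lines. This is a hedged illustration, not the app's actual scheme; it keeps the full filename inside the bucket (rather than stripping the prefix) so lookups stay a simple string operation.

```python
from pathlib import Path

def shard_path(root: Path, filename: str) -> Path:
    """Bucket files by the first two characters of the name,
    e.g. 23232442.jpg -> 23/23232442.jpg, capping each directory's size."""
    return root / filename[:2] / filename
```

With numeric IDs this spreads a million files over at most 100 buckets of ~10 000 entries each, which most filesystems handle comfortably.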
|
# ? Aug 29, 2019 12:17 |
|
suggest they store the image binary in a local mysql database running on your local gaming laptop
|
# ? Aug 29, 2019 14:14 |
|
Honestly, it is a weird way of doing it, but it's not uncommon for weird poo poo like that to stick around long after requirements have changed. The important question is: does it matter? Is it causing a problem? If it's not, but an incoming change causes you to think it might become a problem, it's part of your job to optimise for that newly introduced problem. But yeah, I'm trying hard to be kind here, just in case.
|
# ? Aug 29, 2019 14:34 |
|
Kuule hain nussivan posted:I'd love it if someone could confirm that the below way of doing this is crazy and a true coding horror.

On a Unixlike that won't scale very well; the time to walk the inodes increases worse than linearly. A million years ago this used to be a big problem on news servers, as each message would be a unique file in the directory, with the path corresponding to the newsgroup name (i.e. /var/spool/usenet/alt/binaries/linux). The more active the newsgroup, the longer it would take to determine whether you had a particular message. I did a hack to the Linux kernel that would let open() operate on a substring inode:%d, where if it matched it'd just open that inode directly rather than try to treat it as a filename. That was a 20x speed increase at the time and would have gotten better with retention. If you want to be a hero and have good tests, put a shellscript 'find' in the path that checks whether it's being called by your app, and if so checks whether the file exists at the right spot and returns that, and if not execs the real find.
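The "shim find" trick described above could look something like this (written as a Python script rather than shell, for concreteness). Everything here is an assumption: the real binary's location, and the exact argument shape the app uses when it calls find.

```python
#!/usr/bin/env python3
"""Hypothetical shim named `find`, placed on PATH ahead of the real binary."""
import os
import sys

REAL_FIND = "/usr/bin/find"  # assumed location of the real find

def shortcut(args):
    """If this looks like the app's single-file lookup (find <dir> -name <file>)
    and the file is sitting where we expect, return its path; otherwise None."""
    if len(args) == 3 and args[1] == "-name":
        candidate = os.path.join(args[0], args[2])
        if os.path.isfile(candidate):
            return candidate
    return None

if __name__ == "__main__":
    hit = shortcut(sys.argv[1:])
    if hit is not None:
        print(hit)
        sys.exit(0)
    # Anything we don't recognize falls through to the real find, untouched.
    os.execv(REAL_FIND, [REAL_FIND] + sys.argv[1:])
```

The closed-source app never notices; it just gets its answer in one stat call instead of a directory walk over a million entries.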
|
# ? Aug 29, 2019 14:59 |
|
Hughlander posted:
If it's causing problems, propose a change to the implementation. If it's not causing problems, leave it alone and do something productive instead.
|
# ? Aug 29, 2019 17:42 |
|
Hughlander posted:If you want to be a hero and have good tests put a shellscript 'find' in the path that checks if it's being called by your app and if so checks if the file exists at the right spot and return that and if not exec to the real find.

tak posted:If it's causing problems, propose a change to the implementation. If it's not causing problems, leave it alone and do something productive instead

I loving hate our software provider. Edit: I seriously can't overstate how bad they are. A simple issue where copying a form didn't empty some text fields which it should have took half a year to fix. HALF...A...loving...YEAR! Kuule hain nussivan fucked around with this message at 17:53 on Aug 29, 2019 |
# ? Aug 29, 2019 17:50 |
|
Kuule hain nussivan posted:
You didn't dump them over that, so they're putting in exactly the level of support that your organization will tolerate. WRT using find, does it specify the exact path or does it use find with -name or grep? If the latter, I guaran-loving-tee it's because Karen at some other customer company wants to nicely arrange all of the files for all of the forms into their own folders, and this is their solution. Edit: also, their "stress test", if they have one, involves maybe 1000 files; it's fast on the dev machine, so what's the problem?????? Volmarias fucked around with this message at 18:14 on Aug 29, 2019 |
# ? Aug 29, 2019 18:12 |
|
Volmarias posted:You didn't dump them over that, so they're putting in exactly the level of support that your organization will tolerate.

I've been saying for the past year or so that we just need to stop loving around and say that since they don't meet the support levels specified in the contract, we won't pay for support. The problem is that the rest of the team (it's a small team, with only two technical people and two field-specific people) are being pansies about it. It doesn't help that when the contract was written, back before I even joined the organization, the support levels were specified by the provider. So only critical errors (which in this case means the app is unreachable) have an actual time-to-fix allotted to them. Everything else is given a vague time limit, like "next update". So basically, they can postpone the deadline forever by just not delivering any updates to us. They don't do this because they don't give a poo poo about support levels, but they theoretically could. It's find with -name, which is possibly limited to the uppermost level of clustering in some cases. I'd have to double-check. But considering their track record, I always assume the worst. And I wish they did even a simple stress test on this thing. I once spent 3 weekends in a row doing updates which never did anything, because these fucks don't test poo poo. When I brought it up, I was told that they have very rigorous tests for their updates, and was told to show proof of updates that didn't do anything. When I offered to look them up, I was told "Well, that's not going to be constructive at all". Edit: But enough about me raging. I'll try and look up some actual horrors for my next post. Shouldn't be hard!
|
# ? Aug 29, 2019 18:47 |
|
Sup, Wrote The Goddamned Tests For The Vendor Because They Didn't buddy. I had to deal with that at my last job, and their support portal only worked in MSIE (unless you changed the user agent), so that was really the first clue of what you were getting. The only reason we had gone with them was that their demo code actually worked. Most of the rest didn't even compile, let alone work.
|
# ? Aug 29, 2019 19:46 |
|
Probably a benign WTF rather than a horror per se. Just learned, while idly looking down the questions on Stack Overflow, that PHP now has nullable type declarations, a bit like C# and probably other languages... but you have to put the ? at the start of the type name, e.g. ?int. I wonder whether they had to put it at the start because putting it at the end would be a breaking change in their crazy Frankenstein language, because it would be parsed as a ternary operator or something.
|
# ? Aug 29, 2019 20:13 |
|
FUN FACT: "PHP" is actually a recursive acronym. It stands for "Pretty Horrible PHP"
|
# ? Aug 29, 2019 20:16 |
|
Hammerite posted:Probably a benign WTF rather than a horror per se. Just learned while idly looking down the questions on Stack Overflow that PHP now has nullable type declarations, a bit like C# and probably other languages... but you have to put the ? at the start of the type name, e.g. ?int

https://news-web.php.net/php.internals/92276 posted:"It's better to use ? position before the type, to reduce fragmentation with HHVM." So blame HHVM, I think.
|
# ? Aug 29, 2019 22:00 |
|
Hammerite posted:I wonder whether they had to put it at the start because putting it at the end would be a breaking change in their crazy frankenstein language because it would be parsed as a ternary operator or something You might wanna go back over the PHP7 RFCs, NikiC rewrote the entire parser. None of the things that used to be "we can't do it because the parser is dumb" are still valid.
|
# ? Aug 30, 2019 06:40 |
|
McGlockenshire posted:You might wanna go back over the PHP7 RFCs, NikiC rewrote the entire parser. None of the things that used to be "we can't do it because the parser is dumb" are still valid. You flatter me (wait, or do you?) by suggesting I might have any recent familiarity with PHP. I was just making an uninformed "lol PHP sucks" shitpost.
|
# ? Aug 30, 2019 09:46 |
|
My problem with PHP is that it doesn't have a personality. For a long time the devs of PHP copied things from Java. Now the inspiration seems to be the good parts of JavaScript. The original inspiration for PHP seems to be C and Perl with bits of C++. In some parts, the language devs were not organized enough or up to the task. The language has the elegance of a universal TV remote; unfortunately it is useful and hard to replace. Maybe things will get better: if JavaScript can redeem itself, maybe PHP can too.
|
# ? Aug 30, 2019 11:11 |
|
Tei posted:Maybe things get better, if Javascript can redeem itself, maybe Php too. if
|
# ? Aug 30, 2019 14:15 |
|
|
|
Can someone sanity check me on this? We upload a file to a customer daily that looks like this:code:
quote:We’re unable to get daily data from your file because we can’t overwrite our old copy of your file since the name is different cause of the timestamp. Can you please remove the timestamp in the below file?

First of all, that is not our file. They're loving like, processing our file and deriving something from it. That's their goddamn responsibility, not ours. Every one of our customers gets the file with the timestamp and might depend on it; we're not going to implement special casing just for you. Our responsibility is to put the file on your server in the specified location with the specified file name prefix, and that's it. My question is this: is it unreasonable to put our foot down and say that we're not going to change functionality just because they're too incompetent to strip a timestamp from a file name? Because we're not going to do their engineering work for them.
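For what it's worth, the work the customer is refusing to do is about three lines. A hypothetical sketch (the prefix and folder names are invented; the idea is just "take the newest file matching the agreed prefix, timestamp suffix and all"):

```python
from pathlib import Path
from typing import Optional

def latest_upload(folder: Path, prefix: str) -> Optional[Path]:
    """Return the most recently modified file whose name starts with the
    agreed prefix, ignoring whatever timestamp suffix the sender appends."""
    matches = sorted(folder.glob(prefix + "*"), key=lambda p: p.stat().st_mtime)
    return matches[-1] if matches else None
```

No overwriting needed on their end, and the sender's naming convention stays untouched for every other customer.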
|
# ? Aug 30, 2019 17:35 |