|
That reminds me of the time when I had to fix a compiled binary using only sed and I/O redirection.
|
# ? Nov 19, 2013 05:39 |
|
|
|
rrrrrrrrrrrt posted:How many of the people claiming to "only" use a text editor have enough plugins loaded up to effectively make them IDEs? Even when I'm using Sublime or Emacs I usually have enough features loaded in (incremental compilation, REPLs, syntax highlighting, goto def, snippets, tags, etc.) that they're effectively mini-IDEs. I'm really lazy and I use a stock text editor, usually configured with the indentation settings of the language I'm using. I'm too much of a curmudgeon to use syntax highlighting. If I'm using something like Java, though, I will use an IDE. I tend to write very small programs, in smaller discrete chunks, so I thankfully haven't had to use bells and whistles to navigate around a project. Other people's code makes me wish I had an IDE, or rather wish I didn't have to deal with it.
|
# ? Nov 19, 2013 05:59 |
|
Thermopyle posted:I'm open to being talked out of this, but my position is that people who think dynamic language IDEs don't do Feature X from their favorite static language IDE just haven't surveyed the field very well. There's not really a way to have this discussion without turning it into a static-vs-dynamic shitfest, sadly. The main thing I use an IDE for in C#/Java is that instant feedback that the thing I just typed will actually compile. And "it compiles" in Perl is kinda worthless, because misnamed methods in Perl compile just fine. So now you're forced to use on-the-fly eval of code to guess if something is legit, but this isn't always possible - what if I'm relying on loading in some external resource to determine the name of that method? Anyways - this was a real problem I had and I just gave up. There wasn't any way to guarantee that when I typed $obj->do_thing that it was a method that existed, and that's really all I want out of an IDE. All the other refactoring tricks are just gravy built on top of that one basic feature.
|
# ? Nov 19, 2013 14:31 |
|
My first IDE was MPLAB, which is a horrible, terrible program, and the person (or people) who made it should commit seppuku because of the dishonor they brought on their families. Then I moved on to MPLABX, which was way better. Nowadays I program mainly in C/C++, and Eclipse CDT is quite nice, especially with remote GDB. Sure, I like VS, but Eclipse is simpler and gets the job done easier in 99% of my cases.
|
# ? Nov 19, 2013 21:36 |
|
Java code:
E: here's another, even worse Java code:
Zaphod42 fucked around with this message at 23:54 on Nov 19, 2013 |
# ? Nov 19, 2013 23:29 |
|
Zaphod42 posted:Eclipse really doesn't like that formatting. The first } looks like it closes the if(cursor != null) and it looks like the if(...) is unmatched, but nope! The second } matches the while, and the if has an implicit one-line body. The actual closing brace for the earlier if is way down out of view. The Zen of Python posted:Explicit is better than implicit. Pythagoras a trois fucked around with this message at 05:39 on Nov 20, 2013 |
# ? Nov 20, 2013 01:24 |
|
So the horror is that people don't know how to indent properly (move both } over a tab and they align properly)? Just checking here.
|
# ? Nov 20, 2013 01:24 |
|
The implicit if statement is the horror; the brackets just muddy the waters. I think.
|
# ? Nov 20, 2013 01:31 |
|
BigRedDot posted:You'd be amazed what you have to resort to when you are forced to code in a locked room on an airgapped network. Are we talking about the government "this room hasn't been cleaned in a decade because NOTHING comes out" type of rooms?
|
# ? Nov 20, 2013 01:39 |
|
Master_Odin posted:So the horror is people don't know how to indent properly (move both } over a tab and then they align properly?)? Just checking here. It's trying to be clever and treat the if/while as a single thing but fails to do that correctly, and it's completely pointless to begin with since you could just write while (cursor != null && cursor.hasNext()) and eliminate the if entirely.
|
# ? Nov 20, 2013 01:55 |
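The snippet itself didn't survive the quote above, so here's a hedged reconstruction of the shape being described, along with the simplification just suggested (all names are invented stand-ins):

```java
import java.util.Iterator;
import java.util.List;

public class CursorDemo {
    // The confusing shape: the if has no braces, so its entire body is the
    // while loop. With the braces misindented, the while's closing } reads
    // as if it closes the if, and the if looks unmatched.
    static String confusing(Iterator<String> cursor) {
        StringBuilder sb = new StringBuilder();
        if (cursor != null)
            while (cursor.hasNext()) {
                sb.append(cursor.next());
            }
        return sb.toString();
    }

    // The fix: fold the null check into the loop condition and drop the if
    // entirely. Short-circuit && means hasNext() is never called on null.
    static String clear(Iterator<String> cursor) {
        StringBuilder sb = new StringBuilder();
        while (cursor != null && cursor.hasNext()) {
            sb.append(cursor.next());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(confusing(List.of("a", "b").iterator()));
        System.out.println(clear(List.of("a", "b").iterator()));
    }
}
```

The two methods behave identically; the second just doesn't invite a dangling-body misreading.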
|
Plorkyeran posted:It's trying to be clever and treat the if/while as a single thing but fails to do that correctly, and it's completely pointless to begin with since you could just write while (cursor != null && cursor.hasNext()) and eliminate the if entirely.
|
# ? Nov 20, 2013 04:12 |
|
This is a snippet from an 800+ line file full of jQuery I recently had to refactor at work: https://gist.github.com/justin-edwards/bbb85413c18dea60a5b5 Yes, that is a nearly 100-line switch statement on the value of true. edit: I should point out that part of the tragedy is that it was written by a coworker of mine about a month ago. This isn't some years-old file written by an intern or something; the guy has a master's. I replaced the 800 lines of jQuery with ~400 lines of Angular. TheSleeper fucked around with this message at 05:28 on Nov 20, 2013 |
# ? Nov 20, 2013 05:25 |
|
TheSleeper posted:This isn't some years old file written by an intern or something, the guy has a masters. I've found that people with master's degrees in CS are actually worse than people with just their bachelor's. Just because he has his master's doesn't mean he's any more competent than an intern.
|
# ? Nov 20, 2013 05:32 |
|
code:
|
# ? Nov 20, 2013 05:38 |
|
Plorkyeran posted:It's trying to be clever and treat the if/while as a single thing but fails to do that correctly, and it's completely pointless to begin with since you could just write while (cursor != null && cursor.hasNext()) and eliminate the if entirely. I hate this idiom because I see this all the time: Java code:
By contract, Statement#executeQuery never returns null.
|
# ? Nov 20, 2013 06:13 |
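The idiom being complained about looks roughly like this (a sketch; the query and table name are invented, and only the never-null contract comes from the JDBC docs):

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryDemo {
    // The dead check lives here: per the JDBC contract, executeQuery either
    // returns a ResultSet (possibly empty) or throws SQLException, so a
    // ResultSet that came from executeQuery can never be null.
    static int countRows(ResultSet rs) throws SQLException {
        int count = 0;
        if (rs != null) {            // <-- the pointless null check
            while (rs.next()) {
                count++;
            }
        }
        return count;
    }

    // Typical call site (sketch):
    static int countItems(Statement stmt) throws SQLException {
        try (ResultSet rs = stmt.executeQuery("SELECT id FROM items")) {
            return countRows(rs);
        }
    }
}
```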
|
code:
|
# ? Nov 20, 2013 06:46 |
|
TheSleeper posted:This is a snippet from an 800+ line file full of jquery I recently had to refactor at work: https://gist.github.com/justin-edwards/bbb85413c18dea60a5b5 I'm not a fan of it, but this isn't really a crazy-unusual thing to do in languages that support it.
|
# ? Nov 20, 2013 06:54 |
|
I'm baffled as to why anyone would want to find the maximum value of an integer by counting until you can count no more. The comments are full of good suggestions too!
|
# ? Nov 20, 2013 11:04 |
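For the record, the approach being mocked does at least terminate in Java, since signed overflow wraps; a minimal sketch of the idea (not the original code):

```java
public class MaxInt {
    // Count upward until incrementing stops producing a bigger number.
    // At Integer.MAX_VALUE, i + 1 wraps to Integer.MIN_VALUE, so the
    // condition finally fails -- after roughly 2^31 iterations.
    static int maxByCounting() {
        int i = 0;
        while (i + 1 > i) {
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        // The sane alternative is, of course, the constant Integer.MAX_VALUE.
        System.out.println(maxByCounting() == Integer.MAX_VALUE);
    }
}
```

Two billion increments to rediscover a compile-time constant.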
|
redleader posted:I'm baffled as to why anyone would want to find the maximum value of an integer by counting until you can count no more. The comments are full of good suggestions too! Some Guy posted:
Who in their right mind is inserting ~3.75 million records into a DB every day? That's ~150,000 an hour. ~2500 a minute. ~43 a second. Inserting 43 records a second into a single database every single day for 18 months. Edit: vvv Wouldn't adding to a database be for unique things like registration or per-user data? 3.75 million unique users every day? I can't think of what you'd need to store in a database that would require you to create a new row for data created that often. Jewel fucked around with this message at 12:53 on Nov 20, 2013 |
# ? Nov 20, 2013 11:32 |
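A quick back-of-the-envelope in Java, assuming the horror was a signed 32-bit auto-increment key; the numbers do line up with an 18-ish-month lifetime:

```java
public class OverflowMath {
    // Days until a signed 32-bit auto-increment key runs out at a given
    // insert rate (integer division, so this rounds down).
    static long daysUntilOverflow(long insertsPerDay) {
        return Integer.MAX_VALUE / insertsPerDay;
    }

    public static void main(String[] args) {
        // ~3.75 million inserts/day is ~43 per second, sustained.
        System.out.println(daysUntilOverflow(3_750_000L)); // 572 days, about 19 months
    }
}
```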
|
That's hardly an astronomical figure.
|
# ? Nov 20, 2013 12:39 |
|
Jewel posted:Edit: vvv Wouldn't adding to a database be for unique things like registration or per-user data? 3.75 million unique users every day? I can't think of what you'd need to store in a database that would require you to create a new row for data created that often. "Users" is the only unique thing you can think of? How would you go about storing, say, "Forum posts"?
|
# ? Nov 20, 2013 13:09 |
|
Jabor posted:"Users" is the only unique thing you can think of? Flat files. Why do you ask?
|
# ? Nov 20, 2013 13:51 |
|
Jabor posted:"Users" is the only unique thing you can think of? Encode them into QR codes and store the JPEGs
|
# ? Nov 20, 2013 14:19 |
|
Don't store them at all, because your posting is bad.
|
# ? Nov 20, 2013 15:15 |
|
Jewel posted:
At a past job, we had a production process where there were maybe two dozen steps done to an inventory item, from start to finish. Every step generated a database record recording who did what step to what item and when. Some of these were manually initiated by employees, and some were generated with automated equipment that ran 24/7. As we added more automation we were up to a throughput of about 8 million items a month going through the pipeline. Every one of these items was physically unique and had to be treated as such, so it generated an individual record in the database for every item's step of the way. Which meant we were definitely pumping that kind of throughput into this database, just averaging out (8 million items * say 24 steps)/30 days in a month.
|
# ? Nov 20, 2013 15:44 |
|
That schema seems kind of suspect.
|
# ? Nov 20, 2013 16:00 |
|
Otto Skorzeny posted:That schema seems kind of suspect. No kidding. I'm trying to picture it and I just can't. What was the reasoning behind it?
|
# ? Nov 20, 2013 16:09 |
|
fidel sarcastro posted:No kidding. I'm trying to picture it and I just can't. What was the reasoning behind it? The items were DNA samples; 90% of the steps represented a bunch of chemistry-related work that was unique and specific to each sample (primers, etc.) and would be used to schedule the robots to do the right thing downstream, so we had to track at that level.
|
# ? Nov 20, 2013 16:18 |
|
Jewel posted:Who in their right mind is inserting ~3.75 million records into a DB every day? That's ~150,000 an hour. ~2500 a minute. ~43 a second. The electronic discovery industry crushes this number. A company I worked for loaded more than that daily, and there was always pressure to get that number up. Before anyone asks - it's because the entire process revolves around taking dumps of exported data (usually mail server archives and desktop images) and processing them into data that can be easily searched and accessed by a large number of lawyers. For legal reasons, the chain of custody within the company needs to be tracked, so every single email and every single document that gets transmitted gets at least one row in a database. We're talking multiple TB of email data at a time. Our bulk data loaders were burning 24x7. We had legions of people working shifts to just keep the processing scripts running. Maybe e-discovery is a horrors thread unto itself, though.
|
# ? Nov 20, 2013 16:24 |
|
kitten smoothie posted:The items were DNA samples; 90% of the steps represented bunch of chemistry-related work that was unique and specific to each sample (primers, etc) and would be used to schedule the robots to do the right thing downstream, so we had to track at that level. The confusion wasn't about why you needed to store who did what and when, but about why each event was its own record rather than a pair of columns or something. e: or, you know, several tables rather than one Blotto Skorzany fucked around with this message at 16:33 on Nov 20, 2013 |
# ? Nov 20, 2013 16:30 |
|
Otto Skorzeny posted:The confusion wasn't about why you needed to store who did what and when, but about why each event was its own record rather than a pair of columns or something. e: or, you know, several tables rather than one Yeah, sounds like a case of using one big God table. We've got the opposite problem: people keep adding trivial things to the schema, and now there's an absurd number of tables and relationships to keep track of.
|
# ? Nov 20, 2013 17:29 |
|
Otto Skorzeny posted:The confusion wasn't about why you needed to store who did what and when, but about why each event was its own record rather than a pair of columns or something. e: or, you know, several tables rather than one To clarify, we had several tables. We didn't, say, have one bigass table that got a new row re-recording all sample data for every event.

Samples - things we have
Event Types - things we do, in general
Events - things that happened; referencing an event type, who did it, when
Sample Events - sample ID, event ID, and a couple of fields for stuff that was specific to that combination of sample and event invocation

While some steps were a 1:1 relationship to samples, the majority happened where a person took a block of 384 of them and put them on a robot deck, and the robot dealt with the things that mattered for each individual piece. No point in logging the "who did it, when, where" 383 more times than you have to. The sample event table was what grew at a rate of several million a day. The events table grew at a rate of ~1/384 that. kitten smoothie fucked around with this message at 17:36 on Nov 20, 2013 |
# ? Nov 20, 2013 17:30 |
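That 1:384 split can be sketched in a few lines (a toy model with invented names, not the real schema):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDemo {
    // One row per robot run: which kind of step, who did it, when.
    record Event(long id, String eventType, String operator, long timestamp) {}

    // One row per sample in that run; only the per-sample details live here,
    // referencing the shared Event by ID instead of repeating who/when/where.
    record SampleEvent(long sampleId, long eventId, String detail) {}

    // Log a batch of 384 samples as one Event plus 384 SampleEvents.
    static List<SampleEvent> logBatch(Event event, List<Long> sampleIds) {
        List<SampleEvent> rows = new ArrayList<>();
        for (long sampleId : sampleIds) {
            rows.add(new SampleEvent(sampleId, event.id(), "per-sample data"));
        }
        return rows;
    }
}
```

One event row amortized over 384 sample rows is exactly the ~1/384 growth ratio described above.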
|
Jewel posted:Who in their right mind is inserting ~3.75 million records into a DB every day? That's ~150,000 an hour. ~2500 a minute. ~43 a second. I was involved in an arcade-style gaming site that released 3 or so new games each day. Each game would have roughly 1.8 million unique users play it on their release day. We obviously stored people's high scores. So based on that alone we would be inserting ~5.4 million new high score records per day. We used a compound primary key though (game_id and user_id) so it was a non-issue.
|
# ? Nov 20, 2013 22:16 |
|
npe posted:Maybe e-discovery is a horrors thread unto itself, though. It is. At the intersection of the "Coding Horrors" and the "Started off Barrister, Ended Up Barista" threads.
|
# ? Nov 21, 2013 02:05 |
|
Jewel posted:
BI project I was on had 12 million rows an hour. I used to joke that one in a million things going wrong would happen 300 times a day. Of course we didn't use int32s as keys... Hughlander fucked around with this message at 03:41 on Nov 21, 2013 |
# ? Nov 21, 2013 03:23 |
|
Jewel posted:Who in their right mind is inserting ~3.75 million records into a DB every day? That's ~150,000 an hour. ~2500 a minute. ~43 a second. quote:Recently, something remarkable happened on Twitter: On Saturday, August 3 in Japan, people watched an airing of Castle in the Sky, and at one moment they took to Twitter so much that we hit a one-second peak of 143,199 Tweets per second. (August 2 at 7:21:50 PDT; August 3 at 11:21:50 JST)
|
# ? Nov 21, 2013 20:57 |
|
I don't get why, in the era of social networks, micro trading and other "big data", people are surprised that some business might have to do thousands of insertions per second into a database...
|
# ? Nov 22, 2013 00:46 |
|
Really any service that has a large amount of traffic that requires tracking as well... like advertising. We process ~12k requests a second on a single action that records every hit into a database. Granted, the hits don't stay forever... maybe 3 months? shodanjr_gr posted:I don't get why, in the era of social networks, micro trading and other "big data", people are surprised that some business might have to do thousands of insertions per second into a database... They've never worked on anything "at scale".
|
# ? Nov 22, 2013 00:48 |
|
I know some people who think database transactions are "heavy", and think that the world will collapse if they do more than 500 a second or something.
|
# ? Nov 22, 2013 01:18 |
|
|
|
I kinda forgot about social media with that post, oops. By the sound of the comment, though, the commenter did not run Twitter or Tumblr or something of the sort, and I'm willing to bet what he was doing was horribly inefficient.
|
# ? Nov 22, 2013 04:07 |