|
I'm up to my eyeballs in "go through the entire project and find out what little helper functions we've started aren't finished and what web services are in what state of being (not)finished and if there's even a db for them to read from and document it" right now. Let's party!
|
# ? Nov 1, 2013 16:10 |
|
|
kitten smoothie posted:Please make that thread. I once heard a greybeard story about how they accidentally burned a hole through a cinder block wall by underflowing the power output value on the controller of a LASIK machine during testing. It shut off because it blew the breakers in the building. Think happy thoughts if you get your eyes fixed
|
# ? Nov 1, 2013 17:38 |
|
2banks1swap.avi posted:I'm up to my eyeballs in "go through the entire project and find out what little helper functions we've started aren't finished and what web services are in what state of being (not)finished and if there's even a db for them to read from and document it" right now I would love to do that at my workplace just to show how much dead code we have / utility things that nobody knows about. It's just that it would take three years in our current system.
|
# ? Nov 1, 2013 17:44 |
|
2banks1swap.avi posted:I'm up to my eyeballs in "go through the entire project and find out what little helper functions we've started aren't finished and what web services are in what state of being (not)finished and if there's even a db for them to read from and document it" right now You should respectfully request that your team start using proper project management tools. If you had tasks with changesets tied to them, you'd be able to find out what's not done very easily, and what code originated from what task.
|
# ? Nov 1, 2013 17:48 |
|
2banks1swap.avi posted:I'm up to my eyeballs in "go through the entire project and find out what little helper functions we've started aren't finished and what web services are in what state of being (not)finished and if there's even a db for them to read from and document it" right now Sounds like you get to delete code! That's my favorite thing!
|
# ? Nov 1, 2013 17:53 |
|
We had some bus factor going on - the guy who was doing a lot of these little helpers and web services did it semi-autonomously because he worked remote a lot and came in sporadically due to his health. He has since had to leave indefinitely. We also in the same time frame had our team lead leave. What I'm trying to say is that I'll be the one doing this, most likely. I'll have been a developer a whole year this month. Woo responsibility!
|
# ? Nov 1, 2013 17:54 |
|
Dren posted:Sounds like you get to delete code! That's my favorite thing! HAHAHA NO
|
# ? Nov 1, 2013 17:54 |
|
Bet there aren't tests to prove you don't break something important by removing some "dead" code.
|
# ? Nov 1, 2013 19:30 |
|
Munkeymon posted:I once heard a greybeard story about how they accidentally burned a hole through a cinder block wall by underflowing the power output value on the controller of a LASIK machine during testing. It shut off because it blew the breakers in the building. Think happy thoughts if you get your eyes fixed Oh, believe me, the thought going through my mind as the laser powered up and I could smell my corneas burning off was that I hoped to high heaven the people who wrote the control software were good developers and were paid enough to care.
|
# ? Nov 1, 2013 19:32 |
|
Kallikrates posted:Bet there aren't tests to prove you don't break something important by removing some "dead" code. Tests? What are those? We don't have time to write tests!
|
# ? Nov 1, 2013 20:07 |
|
Suspicious Dish posted:I sort of wonder if other engineering disciplines fight the same challenges: a mix of carelessness, bad engineers, and tight budgets mean that a ceiling falls somewhere. http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse Cheekio posted:You know, I'd think that a huge multinational like Toyota would have its act together, but again I am confronted with the folly of man. http://en.wikipedia.org/wiki/Patriot_missile#Failure_at_Dhahran http://www.defenseindustrydaily.com/f22-squadron-shot-down-by-the-international-date-line-03087/ Everything is terrible.
|
# ? Nov 1, 2013 23:21 |
|
Ender.uNF posted:Did you even bother to read my post? Not to get all Linus here, but you're a fool. They didn't just fail at embedded engineering 101; they failed abysmally. They spent time and effort to do the opposite of what you should do. Worse than that, the watchdog was serviced by a hardware-generated interrupt.
|
# ? Nov 2, 2013 07:15 |
|
Suspicious Dish posted:Sure, abstractions are leaky and if you're implementing any serious realtime system you need to understand the congestion/latency tradeoff. But that's something you can really only be taught once you already understand TCP. Personally, I think the best way to teach network programming is to start with the network stack (either the OSI or TCP/IP model). Nothing too in-depth, just an electrons->bits->frames->packets->streams sort of overview. A basic knowledge of lower-level tech is often useful to keep them grounded conceptually. I do most of my coding in Python/Scala/Javascript/skinny jeans, pretty far from the metal, but lower-level concerns tend to reach up through the layers of abstraction, and by having a basic idea of how the machine you're programming works, you'll have a context to understand things like interrupts and bit-twiddling and cache locality.
|
# ? Nov 2, 2013 10:54 |
|
Suspicious Dish posted:Building your own packet protocol and bidirectional stream-of-bytes transport that opens a new listening socket on every message on top of TCP is crazy. TCP gives you exactly that. They clearly don't understand TCP well enough to start to poke under it. websocket
|
# ? Nov 2, 2013 11:29 |
|
PleasingFungus posted:http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse Thank you. This is awesome reading.
|
# ? Nov 2, 2013 16:44 |
|
Ok, I see people making fun of websockets a lot, but as someone with literally no web programming experience, what's a better way to do live browser content?
|
# ? Nov 2, 2013 18:23 |
|
GrumpyDoctor posted:Ok, I see people making fun of websockets a lot, but as someone with literally no web programming experience, what's a better way to do live browser content? You can always use server-sent events, which basically long-polls a specific URL and shits back content to your front end whenever you want it to. http://en.wikipedia.org/wiki/Server-sent_events
|
# ? Nov 2, 2013 18:52 |
|
GrumpyDoctor posted:Ok, I see people making fun of websockets a lot, but as someone with literally no web programming experience, what's a better way to do live browser content?
- Flash socket
- Long polling
- Multipart streaming
- Forever iframe
- JSONP polling
(thanks socket.io page!)
|
# ? Nov 2, 2013 20:00 |
|
WebSockets are the best way. It's not great, but everything else sucks more.
|
# ? Nov 2, 2013 20:40 |
|
kitten smoothie posted:Please make that thread. Otto Skorzeny posted:What agency approvals did you guys have to get? We had to get FDA approvals to certify everything as medical devices, and we usually had to get local regulatory permits for making radiation. Sorry for the megapost. This is related to my experience working on medical particle accelerators for treating cancer.

Part 1: The Legacy System

The legacy system was really hard to maintain. It consisted of about 20 different applications that were built using an ancient "toolkit" which is no longer supported, and we only had the binaries for that toolkit. These 20 or so applications all communicated over a primitive TCP messaging system. These apps were mostly written in C and ran only on HP Unix (thanks, ancient toolkit).

Some parts of the system were "required by law" to be done "in realtime" (at least that was the interpretation, anyway). So their solution was to use an ancient version of vxWorks for the safety-critical stuff. It was so old that it had no memory protection - bugs in one task could manifest themselves as erratic behavior in a different task. Additionally, the toolkit was not available for vxWorks, so they had written their own TCP system to get data back and forth between the vxWorks machines and the Unix messaging system. Adding new message types was a predictable source of problems, and would routinely break all or part of the messaging system.

The legacy codebase made very interesting use of Makefiles. Each directory had a Makefile inside it - something I have never seen before or since.
So it might look something like this:
PROTON_SRC/
├── Makefile
└── src
    ├── app_1
    │   ├── component_1
    │   │   ├── class_1
    │   │   │   ├── class_1.cpp
    │   │   │   ├── class_1.h
    │   │   │   └── Makefile
    │   │   ├── class_2
    │   │   │   ├── class_2.cpp
    │   │   │   ├── class_2.h
    │   │   │   └── Makefile
    │   │   └── Makefile
    │   ├── component_2
    │   │   ├── component_a
    │   │   │   ├── class_a.cpp
    │   │   │   ├── class_a.h
    │   │   │   └── Makefile
    │   │   ├── component_b
    │   │   │   ├── class_b.cpp
    │   │   │   ├── class_b.h
    │   │   │   └── Makefile
    │   │   └── Makefile
    │   └── Makefile
    ├── app_2 ...
    ├── app_3 ...
    └── Makefile

Building the code was insane. You had to properly set bunches of environment variables, and no one had bothered to write a script to check your environment automatically (I eventually did). You could not build one app from its directory; you could only build from the very top directory - probably due to relative paths in the Makefiles.

The company had been installing about 1-2 of these big, expensive proton treatment centers per year since about 2000. So we were getting to the point where we had about 10 different centers. The problem was that there was no binary compatibility between the centers. Feature configuration was handled with #IFDEFs and #DEFINEs sprinkled through the codebase, which necessitated that each site add their own configuration through even more #IFDEFs, and build their own binaries. Worse still, some apps (the vxWorks ones) needed to be able to run with different configurations within each center. That configuration was all done with #IFDEFs, #DEFINEs and Makefile goofiness as well, so each vxWorks app had to be built three or four different times.

For revision control, we used something called Clearcase. It's a very complex system that I actually kind of grew to like. It provided a sort of virtual filesystem, and its own set of build tools. Instead of the familiar 'make && make install' that you'd find on a normal Linux system, you'd do something like this: code:
Improving things was made really difficult by the lack of a defined development process in the company. In Clearcase, you have the ability to create a real loving mess really quickly, and that's exactly what happened. So many branches were made that changing even a single line of code was usually pretty painful.

This image is one that I grabbed off Google of Clearcase: It basically shows the branching and merging of one single file. It was really common for us to have hundreds of branches and a thousand or so merges on a single file. So just multiply that image by about a hundred to get an idea of the clusterfuck that was present. For one single file.

This complexity also made it essentially impossible to have any kind of automated nightly builds, because each site had such a radically different configuration of what branches they were using (Clearcase lets you use as many different branches in a single build as you want), and also because our developers would rename or delete branches, mix features between branches, introduce dependencies in one branch that were only satisfied in another branch, etc etc etc.

Once you had built and installed the codebase, it was time to launch it. And what mechanism do you suppose was used for stopping and starting all those precious applications? That's right, more Makefiles! Essentially it all boiled down to: code:
code:
So, just to be clear, the radiation centers were owned by a university, hospital, or big healthcare company, and I worked for the company that manufactured everything. I was basically there to install, test and maintain "the software stuff".

I became buddies with one of the physicists who worked for our customer, and one night he tracked me down. He was holding a sheet of special plastic that turns black in the areas where it's been exposed to radiation. He was panicked. I looked at the sheet for a couple seconds. Something was wrong with it. There was a gap that had not turned black when I knew that it should have. I could feel the hair stand up on my neck, and my skin flush.

"How... how did you make that?" I squeaked.

"I just ran a [regular irradiation] and..."

"Did you treat anyone like that?" I stammered.

"I don't know..."

It turns out they didn't kill anybody. But what happened is that the legacy system had transposed X and Y in the final beam scanning. Normally, the X and Y dimensions are about equal for the treatment plan, so you'd never notice.

I went on to install another center in the Czech Republic, which took about two years. During that time, a new software architecture was introduced to replace the legacy system. After finishing the installation of that center, I left the company, and I'm back in the US now. But that's for another post.

My Rhythmic Crotch fucked around with this message at 21:02 on Nov 2, 2013 |
# ? Nov 2, 2013 20:48 |
|
My Rhythmic Crotch posted:Each directory had a Makefile inside it - something I have never seen before or since.
|
# ? Nov 2, 2013 23:42 |
|
Recursive make is kind of terrible (it has a lot of overhead, doesn't track dependencies very well, and doesn't parallelize very well), but it's still pretty much the standard choice because it's the most straightforward way to modularize a build system and the problems often don't matter.
|
# ? Nov 3, 2013 00:19 |
|
Make really is the worst build system except for all the other systems that have been tried.
|
# ? Nov 3, 2013 00:49 |
|
Jabor posted:Make really is the worst build system except for all the other systems that have been tried. Is this unique to C++? 'Cause I don't see Maven being that bad.
|
# ? Nov 3, 2013 01:56 |
|
For my python shop, we use paver to kick off an rpm build: http://pythonhosted.org/Paver/ Does all the intermediate steps and miscellaneous tasks I usually expect from a build system. Recommended.
|
# ? Nov 3, 2013 02:49 |
|
GrumpyDoctor posted:Ok, I see people making fun of websockets a lot, but as someone with literally no web programming experience, what's a better way to do live browser content? Programmers just generally like to poo poo on web programmers. Websockets are okay, and as much as they're a "reinvention" of sockets (they aren't really), the differences between them and the traditional TCP socket abstraction are necessary evils in a browser.
|
# ? Nov 3, 2013 02:58 |
|
Plorkyeran posted:Recursive make is kind of terrible (it has a lot of overhead, doesn't track dependencies very well, and doesn't parallelize very well), but it's still pretty much the standard choice due to that it's the most straightforward way to modularize a build system and the problems often don't matter. Some guy wrote up an interesting thing on recursive make, which is kind of inevitably titled Recursive Make Considered Harmful.
|
# ? Nov 3, 2013 06:26 |
|
Fullets posted:Some guy wrote up an interesting thing on recursive make which is kind of inevitably title recursive make considered harmful. Eagerly awaiting a ""considered harmful" papers considered harmful" meta paper that lets us just get a new meme already.
|
# ? Nov 3, 2013 15:40 |
|
Volmarias posted:Eagerly awaiting a ""considered harmful" papers considered harmful" meta paper that lets us just get a new meme already. Pretty sure that exists.
|
# ? Nov 3, 2013 15:49 |
|
Volmarias posted:Eagerly awaiting a ""considered harmful" papers considered harmful" meta paper that lets us just get a new meme already. http://meyerweb.com/eric/comment/chech.html
|
# ? Nov 3, 2013 16:02 |
|
quote:Frank Rubin published a criticism of Dijkstra's letter in the March 1987 CACM where it appeared under the title 'GOTO Considered Harmful' Considered Harmful.[7] The May 1987 CACM printed further replies, both for and against, under the title '"GOTO Considered Harmful" Considered Harmful' Considered Harmful?.[8] Dijkstra's own response to this controversy was titled On a Somewhat Disappointing Correspondence.[9]
|
# ? Nov 3, 2013 16:37 |
|
Interestingly enough, Dijkstra didn't actually title the letters, the CACM editor did.
|
# ? Nov 3, 2013 16:48 |
|
Plorkyeran posted:Recursive make is kind of terrible (it has a lot of overhead, doesn't track dependencies very well, and doesn't parallelize very well)
|
# ? Nov 3, 2013 22:38 |
|
Jabor posted:Make really is the worst build system except for all the other systems that have been tried. MSBuild isn't too bad in the .NET world. It can still suck if you overextend it and try to deploy software using build tasks, but actually compiling software is pretty painless.
|
# ? Nov 3, 2013 22:49 |
|
At my last job, one of my duties was to write software that integrated data from DNA sequencers into our own data pipeline. Some of this meant having to deal with the sequencing machine vendor's software. We would always want to make sure that the vendor's software was run in a sensible and reproducible manner, because that's how you do respectable science.

The sequencer was basically a computer-controlled microscope with some extremely high precision motors, and some microfluidics controls to send reagents across the slide at the right time. You'd turn it on, it would run for a week to ten days, then you have terabytes worth of images that need to be crammed into the vendor's analysis pipeline, and that would poop out a DNA sequence in text files. Their software would basically have to process the images to find clusters of spots on them, calculate intensities, then do a whole bunch of analysis to convert intensity readings to DNA sequence. There were all kinds of dependent steps in this process; some could be run in parallel, some couldn't.

So you would run some script of theirs with a bunch of horribly arcane command line switches. It would set up a project directory containing the images, and then a whole bunch of nested directories under that. Each one got a Makefile to describe whatever analysis process would get invoked at that time. Then you'd just run "make" and in theory you'd be off to the races. It was at the same time something that was pretty clever, and also downright evil. I mean, the vendor script that teed up the whole thing was even called "goat_pipeline.py."
|
# ? Nov 3, 2013 23:45 |
|
One of my coworkers just failed-over our MS-SQL cluster in a panic because "MEMORY USAGE HAS BEEN PEGGED AT 12.7 GIGS FOR THE LAST 7 DAYS!!!"
|
# ? Nov 4, 2013 17:50 |
|
I'm working on a low-level C project that's mostly Linux, but now there is also a sizable Windows portion. I have an issue with "standards" for function return values: if it returns int (usually Linux), 0 is success 99% of the time. On Windows, functions (especially Windows APIs) usually return BOOL and of course then 0 is failure. There are also Windows functions that return Windows status codes and then 0 is success again.
|
# ? Nov 4, 2013 17:55 |
|
Dietrich posted:One of my coworkers just failed-over our MS-SQL cluster in a panic because "MEMORY USAGE HAS BEEN PEGGED AT 12.7 GIGS FOR THE LAST 7 DAYS!!!" Why in the world did he have the ability to do that? I mean, obviously he knows a ton about databases...
|
# ? Nov 4, 2013 18:19 |
|
necrotic posted:Why in the world did he have the ability to do that? I mean, obviously he knows a ton about databases... The only people with the ability to do that are our network admins, who also know very little about SQL server. He told them to do it.
|
# ? Nov 4, 2013 18:26 |
|
|
Jabor posted:Make really is the worst build system except for all the other systems that have been tried. I really like scons for the combo of being a joy to write and scaling terribly.
|
# ? Nov 4, 2013 18:39 |