Fuck them
Jan 21, 2011

and their bullshit
:yotj:
I'm up to my eyeballs in "go through the entire project and find out what little helper functions we've started aren't finished and what web services are in what state of being (not)finished and if there's even a db for them to read from and document it" right now :toot:

Let's party!


Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



kitten smoothie posted:

Please make that thread.

I've always wondered about the software that drives things like medical accelerators and LASIK lasers, as to how that stuff is developed and vetted before it goes throwing potentially murderous or blinding energy at a human patient.

I once heard a greybeard story about how they accidentally burned a hole through a cinder block wall by underflowing the power output value on the controller of a LASIK machine during testing. It only shut off because it blew the breakers in the building. Think happy thoughts if you get your eyes fixed :shepface:

EntranceJew
Nov 5, 2009

2banks1swap.avi posted:

I'm up to my eyeballs in "go through the entire project and find out what little helper functions we've started aren't finished and what web services are in what state of being (not)finished and if there's even a db for them to read from and document it" right now :toot:

Let's party!

I would love to do that at my workplace just to show how much dead code we have / utility things that nobody knows about.

The only problem is that it would take three years in our current system.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

2banks1swap.avi posted:

I'm up to my eyeballs in "go through the entire project and find out what little helper functions we've started aren't finished and what web services are in what state of being (not)finished and if there's even a db for them to read from and document it" right now :toot:

Let's party!

You should respectfully request that your team start using proper project management tools. If you had tasks with changesets tied to them, you'd be able to find out what's not done very easily, and what code originated from what task.

Dren
Jan 5, 2001

Pillbug

2banks1swap.avi posted:

I'm up to my eyeballs in "go through the entire project and find out what little helper functions we've started aren't finished and what web services are in what state of being (not)finished and if there's even a db for them to read from and document it" right now :toot:

Let's party!

Sounds like you get to delete code! That's my favorite thing!

Fuck them
Jan 21, 2011

and their bullshit
:yotj:
We had some bus factor going on - the guy who was doing a lot of these little helpers and web services worked semi-autonomously because he was remote a lot and only came in sporadically due to his health. He has since had to leave indefinitely. :(

We also in the same time frame had our team lead leave.

What I'm trying to say is that I'll be the one doing this most likely. I'll have been a developer a whole year this month. Woo responsibility!

Fuck them
Jan 21, 2011

and their bullshit
:yotj:

Dren posted:

Sounds like you get to delete code! That's my favorite thing!

HAHAHA NO

Kallikrates
Jul 7, 2002
Pro Lurker
Bet there aren't tests to prove you don't break something important by removing some "dead" code.

kitten smoothie
Dec 29, 2001

Munkeymon posted:

I once heard a greybeard story about how they accidentally burned a hole through a cinder block wall by underflowing the power output value on the controller of a LASIK machine during testing. It only shut off because it blew the breakers in the building. Think happy thoughts if you get your eyes fixed :shepface:

Oh, believe me, the thought going through my mind as the laser powered up and I could smell my corneas burning off was that I hoped to high heaven the people who wrote the control software were good developers and were paid enough to care.

Fuck them
Jan 21, 2011

and their bullshit
:yotj:

Kallikrates posted:

Bet there aren't tests to prove you don't break something important by removing some "dead" code.

Tests? What are those? We don't have time to write tests!

PleasingFungus
Oct 10, 2012
idiot asshole bitch who should fuck off

Suspicious Dish posted:

I sort of wonder if other engineering disciplines fight the same challenges: a mix of carelessness, bad engineers, and tight budgets means that a ceiling falls somewhere.

I can't imagine this stuff is only applicable to software.

http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse

Cheekio posted:

You know, I'd think that a huge multinational like Toyota would have its act together, but again I am confronted with the folly of man.

That makes me wonder about the whole lot of it. Do contractors for the military end up with spaghetti code running their blackhawks? What about the NSA? Is it spaghetti all the way down?

http://en.wikipedia.org/wiki/Patriot_missile#Failure_at_Dhahran

http://www.defenseindustrydaily.com/f22-squadron-shot-down-by-the-international-date-line-03087/

Everything is terrible.

FlapYoJacks
Feb 12, 2009

Ender.uNF posted:

Did you even bother to read my post? Not to get all Linus here, but you're a fool. They didn't just fail at embedded engineering 101; they failed abysmally. They spent time and effort to do the opposite of what you should do.

I mean, every single embedded system ever has a watchdog timer. That is, quite literally, Baby's First Embedded System. The function of the simplest, dumbest watchdog is to verify that all required tasks are running and restart any that have failed. A marginally smarter one will also catch tasks that run too often or not often enough, and kill a lower-priority task that eats up too much CPU time, but let's not get too fancy here.

Toyota shipped Camrys (and other models) in 2005, 2006, 2007, 2008, 2009, and 2010 (maybe more) using almost the exact same code that had a basically non-functional watchdog. Almost every single task in the entire ECU could poo poo itself and the watchdog would keep going "ALL SYSTEMS GO, FULL STEAM AHEAD!".

We are talking about a basic function, shared across millions of cars, that wouldn't have taken anything more than a peer review and a week of one coder's time to fix. Instead, any stack overflow, race condition, pointer-dereferencing bug, cosmic ray, et al. can disable almost all the car's failsafes and/or trigger unintended acceleration. Or just randomly tilt the driver's side mirror. No one knows, and there won't be any logs or diagnostic codes written; the ECU will just randomly start doing or not doing... well... something!

Not that Toyota would know this, as they never tested the software or bothered to look for the non-existent logs anyway.

Edit: if I were an insurance company, I'd refuse to cover these Toyota vehicles until Toyota brought in outside programmers to help train their people, review the code, and implement some better processes. Instead, Toyota seems to be pulling a tobacco company / 70s American car company routine: "nothing to see here, move along, it was all driver error, what's fault injection?"

Worse than that, the watchdog was serviced by a hardware-generated interrupt. :allears:
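
For the non-embedded folks: the thing being described really is tiny. A sketch of the dumbest possible version, in Python for readability (all names and the timeout are made up):
code:
# Not Toyota's code, obviously -- just the concept. Every task
# calls heartbeat() from its main loop; the monitor calls
# watchdog_tick() on a timer and resets anything that went quiet.
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds a task may go silent
last_heartbeat = {}      # task name -> time of last check-in

def heartbeat(task_name):
    last_heartbeat[task_name] = time.monotonic()

def watchdog_tick(restart):
    # `restart` is whatever recovery hook you have: restart the
    # task, or reset the whole ECU and log why
    now = time.monotonic()
    for task_name, seen in list(last_heartbeat.items()):
        if now - seen > HEARTBEAT_TIMEOUT:
            restart(task_name)
And that's exactly why feeding the watchdog from a hardware-generated interrupt defeats the point: the interrupt keeps firing no matter how wedged your tasks are, so nothing ever looks quiet.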

The Insect Court
Nov 22, 2012

by FactsAreUseless

Suspicious Dish posted:

Sure, abstractions are leaky, and if you're implementing any serious realtime system you need to understand the congestion/latency tradeoff. But that's something that can really only be taught once you already understand TCP.

Building your own packet protocol and bidirectional stream-of-bytes transport that opens a new listening socket on every message on top of TCP is crazy. TCP gives you exactly that. They clearly don't understand TCP well enough to start to poke under it.

That's why I feel like ever mentioning the word "packet" to people beginning to learn basic network programming is a serious disservice. Poke at that later, but for now, it's like a file that you can read from and write to and it magically pops out on the other side!

Personally, I think the best way to teach network programming is to start with the network stack (either the OSI or TCP/IP model). Nothing too in-depth, just an electrons -> bits -> frames -> packets -> streams sort of overview. A basic knowledge of the lower-level tech is often useful to keep people grounded conceptually. I do most of my coding in Python/Scala/Javascript/skinny jeans, pretty far from the metal, but lower-level concerns tend to reach up through the layers of abstraction, and having a basic idea of how the machine you're programming works gives you the context to understand things like interrupts and bit-twiddling and cache locality.
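
That said, the file analogy isn't even much of a lie to start with. In Python the standard library will literally hand you a file object over TCP -- a sketch (only the host is assumed):
code:
# the "file you can read and write" view of TCP; everything here
# is the standard library
import socket

with socket.create_connection(("example.com", 80)) as sock:
    f = sock.makefile("rwb")  # wrap the stream in a file object
    f.write(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    f.flush()
    print(f.readline().decode().strip())  # e.g. "HTTP/1.0 200 OK"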

tef
May 30, 2004

-> some l-system crap ->

Suspicious Dish posted:

Building your own packet protocol and bidirectional stream-of-bytes transport that opens a new listening socket on every message on top of TCP is crazy. TCP gives you exactly that. They clearly don't understand TCP well enough to start to poke under it.

websocket :3:

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Thank you. This is awesome reading.

raminasi
Jan 25, 2005

a last drink with no ice
Ok, I see people making fun of websockets a lot, but as someone with literally no web programming experience, what's a better way to do live browser content?

Baby Nanny
Jan 4, 2007
ftw m8.

GrumpyDoctor posted:

Ok, I see people making fun of websockets a lot, but as someone with literally no web programming experience, what's a better way to do live browser content?

You can always use server-sent events, which basically holds open a long-lived request to a specific URL and shits content back to your front end whenever the server wants to push something.

http://en.wikipedia.org/wiki/Server-sent_events
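
The server side is barely any code, either. A sketch with Flask (the /events endpoint name is made up; any framework that can stream a response works):
code:
# minimal server-sent-events endpoint (pip install flask)
import time
from flask import Flask, Response

app = Flask(__name__)

def event_stream():
    # one "data:" line plus a blank line is the whole wire format
    while True:
        yield f"data: the time is {time.ctime()}\n\n"
        time.sleep(1)

@app.route("/events")
def events():
    return Response(event_stream(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(threaded=True)
The front end just does new EventSource('/events') and gets a callback per message.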

pokeyman
Nov 26, 2006

That elephant ate my entire platoon.

GrumpyDoctor posted:

Ok, I see people making fun of websockets a lot, but as someone with literally no web programming experience, what's a better way to do live browser content?

Flash socket
Long polling
Multipart streaming
Forever iframe
JSONP polling

(thanks socket.io page!)

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
WebSockets are the best way. It's not great, but everything else sucks more.
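
And the server side is about as small as these things get. A sketch using the third-party Python websockets package (echo handler name is mine; recent versions pass just the connection):
code:
# minimal WebSocket echo server (pip install websockets)
import asyncio
import websockets

async def echo(websocket):
    # whatever the browser sends, send it straight back
    async for message in websocket:
        await websocket.send(message)

async def main():
    async with websockets.serve(echo, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())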

My Rhythmic Crotch
Jan 13, 2011

kitten smoothie posted:

Please make that thread.

I've always wondered about the software that drives things like medical accelerators and LASIK lasers, as to how that stuff is developed and vetted before it goes throwing potentially murderous or blinding energy at a human patient.

Otto Skorzeny posted:

What agency approvals did you guys have to get?

We had to get FDA approvals to certify everything as medical devices, and we usually had to get local regulatory permits for making radiation.

Sorry for the megapost. This is related to my experience working on medical particle accelerators for treating cancer.

Part 1: The Legacy System
The legacy system was really hard to maintain. It consisted of about 20 different applications that were built using an ancient "toolkit" which is no longer supported, and we only had the binaries for that toolkit. These 20 or so applications all communicated over a primitive TCP messaging system. These apps were mostly written in C and ran only on HP Unix (thanks, ancient toolkit). Some parts of the system were "required by law" to be done "in realtime" (at least that was the interpretation, anyway). So their solution was to use an ancient version of vxWorks for the safety-critical stuff. It was so old that it had no memory protection - bugs in one task could manifest themselves as erratic behavior in a different task.

Additionally, the toolkit was not available for vxWorks, so they had written their own TCP system to get data back and forth between the vxWorks machines and the Unix messaging system. Adding new message types was a predictable source of problems, and would routinely break all or part of the messaging system.

The legacy codebase made very interesting use of Makefiles. Each directory had a Makefile inside it - something I have never seen before or since. So it might look something like this:
PROTON_SRC/
├── Makefile
└── src
    ├── app_1
    │   ├── component_1
    │   │   ├── class_1
    │   │   │   ├── class_1.cpp
    │   │   │   ├── class_1.h
    │   │   │   └── Makefile
    │   │   ├── class_2
    │   │   │   ├── class_2.cpp
    │   │   │   ├── class_2.h
    │   │   │   └── Makefile
    │   │   └── Makefile
    │   ├── component_2
    │   │   ├── component_a
    │   │   │   ├── class_a.cpp
    │   │   │   ├── class_a.h
    │   │   │   └── Makefile
    │   │   ├── component_b
    │   │   │   ├── class_b.cpp
    │   │   │   ├── class_b.h
    │   │   │   └── Makefile
    │   │   └── Makefile
    │   └── Makefile
    ├── app_2 ...
    ├── app_3 ...
    └── Makefile

Building the code was insane. You had to properly set bunches of environment variables, and no one had bothered to write a script to check your environment automatically (I eventually did). You could not build one app from its own directory; you could only build from the very top directory - probably due to relative paths in the Makefiles.

The company had been installing about 1-2 of these big, expensive proton treatment centers per year since about 2000, so we were getting to the point where we had about 10 different centers. The problem was that there was no binary compatibility between the centers. Feature configuration was handled with #IFDEFs and #DEFINEs sprinkled through the codebase, which necessitated that each site add its own configuration through even more #IFDEFs and build its own binaries. Worse still, some apps (the vxWorks ones) needed to be able to run with different configurations within each center. That configuration was all done with #IFDEFs, #DEFINEs, and Makefile goofiness as well, so each vxWorks app had to be built three or four different times.

For revision control, we used something called Clearcase. It's a very complex system that I actually kind of grew to like. It provided a sort of virtual filesystem and its own set of build tools. Instead of the familiar 'make && make install' that you'd find on a normal Linux system, you'd do something like this:
code:
clearmake CONFIG=something OTHER_CONFIG=something_else build_all
(wait 30 minutes, pray the build finishes)
clearmake CONFIG_DERP=dingleberry CONFIG_RUNTIME=production install
Grepping the output of the build was hilarious. Each build produced a couple hundred warnings. There was no system to automatically build the latest code, mostly due to the complexity of managing code in Clearcase.

Improving things was made really difficult by the lack of a defined development process in the company. In Clearcase, you have the ability to create a real loving mess really quickly, and that's exactly what happened. So many branches were made that changing even a single line of code was usually pretty painful. Here's an image of a Clearcase version tree that I grabbed off Google:

It basically shows the branching and merging of one single file. It was really common for us to have hundreds of branches and a thousand or so merges on a single file. So just multiply that image by about a hundred to get an idea of the clusterfuck that was present. For one single file. This complexity also made it essentially impossible to have any kind of automated nightly builds, because each site had such a radically different configuration of what branches they were using (Clearcase lets you use as many different branches in a single build as you want), and also because our developers would rename or delete branches, mix features between branches, introduce dependencies in one branch that were only satisfied in another branch, etc etc etc.

Once you had built and installed the codebase, it was time to launch it. And what mechanism do you suppose was used for stopping and starting all those precious applications? That's right, more Makefiles! Essentially it all boiled down to:
code:
make start_app1 start_app2 start_app3
And to shutdown:
code:
make stop_app1 stop_app2 stop_app3
Which brings us to ... testing. The logistics of these treatment centers made testing nearly impossible. Contracts stipulated that the customer had full access to the center for something like 18 hours a day, 6 days a week. And during that time, only production code was allowed. So that left us with basically nights and weekends for testing software. The cyclotron is a fickle beast, and extracting beam during those small windows of time (I'm just a software guy, not a cyclotron expert) can be really loving tricky. So some features would get tested for maybe only an hour or two at one center before being given the final blessing and put into production.

So, just to be clear, the radiation centers were owned by a university, hospital, or big healthcare company, and I worked for the company that manufactured everything. I was basically there to install, test and maintain "the software stuff". I became buddies with one of the physicists who worked for our customer, and one night he tracked me down. He was holding a sheet of special plastic that turns black in the areas where it's been exposed to radiation. He was panicked.

I looked at the sheet for a couple seconds. Something was wrong with it. There was a gap that had not turned black, when I knew that it should have. I could feel the hair stand up on my neck, and my skin flush.

"How... how did you make that?" I squeaked.
"I just ran a [regular irradiation] and..."
"Did you treat anyone like that?" I stammered.
"I don't know..."

It turns out they didn't kill anybody. But what happened is that the legacy system had transposed X and Y in the final beam scanning. Normally, the X and Y dimensions are about equal for the treatment plan, so you'd never notice.

I went on to install another center in the Czech Republic, which took about two years. During that time, a new software architecture was introduced to replace the legacy system. After finishing the installation of that center, I left the company, and I'm back in the US now. But that's for another post.


Presto
Nov 22, 2002

Keep calm and Harry on.

My Rhythmic Crotch posted:

Each directory had a Makefile inside it - something I have never seen before or since.

It's a recursive make system. We have the same thing at my job.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Recursive make is kind of terrible (it has a lot of overhead, doesn't track dependencies very well, and doesn't parallelize very well), but it's still pretty much the standard choice because it's the most straightforward way to modularize a build system, and the problems often don't matter.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Make really is the worst build system except for all the other systems that have been tried.

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe

Jabor posted:

Make really is the worst build system except for all the other systems that have been tried.

Is this unique to C++? Cause I don't see Maven being that bad.

Lurchington
Jan 2, 2003

Forums Dragoon
For my python shop, we use paver to kick off an rpm build: http://pythonhosted.org/Paver/
It does all the intermediate steps and miscellaneous tasks I usually expect from a build system. Recommended.
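
A pavement.py for that sort of thing is only a few lines. A sketch (task names and the spec path are made up):
code:
# pavement.py -- sketch only, not our actual build
from paver.easy import task, needs, sh

@task
def sdist():
    sh("python setup.py sdist")

@task
@needs("sdist")
def rpm():
    # hand the tarball to rpmbuild; everything else is plumbing
    sh("rpmbuild -ba packaging/myproject.spec")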

Deus Rex
Mar 5, 2005

GrumpyDoctor posted:

Ok, I see people making fun of websockets a lot, but as someone with literally no web programming experience, what's a better way to do live browser content?

Programmers just generally like to poo poo on web programmers. Websockets are okay, and as much as they're a "reinvention" of sockets (they aren't really), the differences between them and the traditional TCP socket abstraction are necessary evils in a browser.

Fullets
Feb 5, 2009

Plorkyeran posted:

Recursive make is kind of terrible (it has a lot of overhead, doesn't track dependencies very well, and doesn't parallelize very well), but it's still pretty much the standard choice due to that it's the most straightforward way to modularize a build system and the problems often don't matter.

Some guy wrote up an interesting thing on recursive make which is, kind of inevitably, titled Recursive Make Considered Harmful.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Fullets posted:

Some guy wrote up an interesting thing on recursive make which is, kind of inevitably, titled Recursive Make Considered Harmful.

Eagerly awaiting a ""considered harmful" papers considered harmful" meta paper that lets us just get a new meme already.

OddObserver
Apr 3, 2009

Volmarias posted:

Eagerly awaiting a ""considered harmful" papers considered harmful" meta paper that lets us just get a new meme already.

Pretty sure that exists.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Volmarias posted:

Eagerly awaiting a ""considered harmful" papers considered harmful" meta paper that lets us just get a new meme already.

http://meyerweb.com/eric/comment/chech.html

Opinion Haver
Apr 9, 2007

quote:

Frank Rubin published a criticism of Dijkstra's letter in the March 1987 CACM where it appeared under the title 'GOTO Considered Harmful' Considered Harmful.[7] The May 1987 CACM printed further replies, both for and against, under the title '"GOTO Considered Harmful" Considered Harmful' Considered Harmful?.[8] Dijkstra's own response to this controversy was titled On a Somewhat Disappointing Correspondence.[9]

BigRedDot
Mar 6, 2008

Interestingly enough, Dijkstra didn't actually title the letter; the CACM editor did.

Presto
Nov 22, 2002

Keep calm and Harry on.

Plorkyeran posted:

Recursive make is kind of terrible (it has a lot of overhead, doesn't track dependencies very well, and doesn't parallelize very well)

I had ours parallelized to the point that if I left off the number following make's -j option, it would lock up the OS by trying to spawn a few thousand compiler processes. The fact that this is possible is kind of a horror in itself.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Jabor posted:

Make really is the worst build system except for all the other systems that have been tried.

MSBuild isn't too bad in the .NET world. It can still suck if you overextend it and try to deploy software using build tasks, but actually compiling software is pretty painless.

kitten smoothie
Dec 29, 2001

At my last job, one of my duties was to write software that integrated data from DNA sequencers into our own data pipeline. Some of this meant having to deal with the sequencing machine vendor's software. We would always want to make sure that the vendor's software was run in a sensible and reproducible manner because that's how you do respectable science.

The sequencer was basically a computer-controlled microscope with some extremely high precision motors, and some microfluidics controls to send reagents across the slide at the right time. You'd turn it on, it would run for a week to ten days, and then you'd have terabytes' worth of images that needed to be crammed into the vendor's analysis pipeline, which would poop out a DNA sequence in text files.

Their software would basically have to process the images to find clusters of spots on them, calculate intensities, then do a whole bunch of analysis to convert intensity readings to DNA sequence. There were all kinds of dependent steps in this process; some could be run in parallel, some couldn't. So you would run some script of theirs with a bunch of horribly arcane command line switches. It would set up a project directory containing the images, and then a whole bunch of nested directories under that. Each one got a Makefile to describe whatever analysis process would get invoked at that time.

Then you'd just run "make" and in theory you'd be off to the races. It was at once pretty clever and downright evil. I mean, the vendor script that teed up the whole thing was even called "goat_pipeline.py."

Dietrich
Sep 11, 2001

One of my coworkers just failed-over our MS-SQL cluster in a panic because :supaburn: "MEMORY USAGE HAS BEEN PEGGED AT 12.7 GIGS FOR THE LAST 7 DAYS!!!" :supaburn:

omeg
Sep 3, 2012

I'm working on a low-level C project that's mostly Linux, but now there is also a sizable Windows portion. I have an issue with "standards" for function return values: if a function returns int (usually on Linux), 0 is success 99% of the time. On Windows, functions (especially Windows APIs) usually return BOOL, and of course then 0 is failure. There are also Windows functions that return Windows status codes, and then 0 is success again. :negative:
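
My coping strategy is to normalize all three conventions at the boundary and never let the magic numbers leak past the wrappers. A sketch (in Python for brevity, helper names made up; in the real C it's the same idea as a set of CHECK_* macros):
code:
def check_posix(ret):
    """Linux convention: 0 is success, nonzero is an error code."""
    if ret != 0:
        raise RuntimeError(f"call failed with error {ret}")

def check_win_bool(ret):
    """Win32 BOOL convention: nonzero is success, 0 is failure."""
    if ret == 0:
        raise RuntimeError("call failed (BOOL was 0)")

def check_win_status(ret):
    """Win32 status-code convention: 0 is success again."""
    if ret != 0:
        raise RuntimeError(f"call failed with status {ret}")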

necrotic
Aug 2, 2005
I owe my brother big time for this!

Dietrich posted:

One of my coworkers just failed-over our MS-SQL cluster in a panic because :supaburn: "MEMORY USAGE HAS BEEN PEGGED AT 12.7 GIGS FOR THE LAST 7 DAYS!!!" :supaburn:

Why in the world did he have the ability to do that? I mean, obviously he knows a ton about databases...

Dietrich
Sep 11, 2001

necrotic posted:

Why in the world did he have the ability to do that? I mean, obviously he knows a ton about databases...

The only people with the ability to do that are our network admins, who also know very little about SQL server. He told them to do it.


Posting Principle
Dec 10, 2011

by Ralp

Jabor posted:

Make really is the worst build system except for all the other systems that have been tried.

I really like scons for the combo of being a joy to write and scaling terribly.
