|
nielsm posted:Perhaps part of the problem is that all the warnings are presented to the person operating the system at the moment. One part of a solution could be to still detect unusual medication amounts, but instead of requiring the present operator to sign it off, require a third party to review the order.
|
# ? May 29, 2017 18:46 |
|
nielsm posted:Perhaps part of the problem is that all the warnings are presented to the person operating the system at the moment. One part of a solution could be to still detect unusual medication amounts, but instead of requiring the present operator to sign it off, require a third party to review the order. Isn't that the role the pharmacist was supposed to be fulfilling? The second person in the chain to not notice the error, in the narrative.
|
# ? May 29, 2017 19:41 |
|
The solution is something he talks about on the last page - the mode change from mg to mg/kg has to be clear, and the warnings need to be more carefully tiered, so that "There isn't a pill of this dosage, please adjust", "This is getting close to maximum recommended dosage", and "This is 40x the lethal dose, idiot" look completely different (and the last one needs to be harder to dismiss).
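A rough sketch of that tiering in code (the names and thresholds here are invented for illustration, not taken from any real system):

```python
from enum import Enum

class Severity(Enum):
    INFO = 1       # "no pill of this dosage, please adjust"
    WARNING = 2    # approaching the maximum recommended dose
    CRITICAL = 3   # far beyond the lethal dose

def classify_dose(dose_mg: float, max_recommended_mg: float, lethal_mg: float) -> Severity:
    """Map an ordered dose onto a warning tier (illustrative thresholds)."""
    if dose_mg >= lethal_mg:
        return Severity.CRITICAL
    if dose_mg >= 0.8 * max_recommended_mg:
        return Severity.WARNING
    return Severity.INFO

def can_dismiss_alone(severity: Severity) -> bool:
    # The point of the tiering: a CRITICAL warning must not be
    # dismissable by the operator alone with one more click.
    return severity is not Severity.CRITICAL
```

The details would obviously be driven by clinical guidelines; the part that matters is that the three tiers are distinct types with distinct dismissal rules, not the same popup at three volumes.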
|
# ? May 29, 2017 20:03 |
|
Dr. Stab posted:Yeah, I'm not trying to speak of "blame" in terms of who to punish. I'm talking about blame in terms of where the error came from and who is capable of fixing it. If the bad specification, rather than negligence by programmers, is the cause of the problems with the software, that still doesn't change the fact that the software has problems, and those problems need to be fixed.

Yeah, that's what I'm talking about as well. I'm saying it's likely the developers tried to get a lot of these things changed and got blocked by the guidelines or the client who hired them. After enough refusals, they'd stop trying so hard. Even more importantly, I wouldn't be surprised if they didn't have the domain knowledge to know what's important, and if the person who coordinated with them wasn't being useful in this regard.

Basically, I've seen so many non-critical-domain projects get designed with even bigger flaws like this, because the client demanded it, or didn't explain what's important, or didn't offer enough feedback to show how people actually use the software. Add in the tons of regulations medical software has to follow and suddenly you've got a recipe for design hell if you don't have someone on board who knows exactly what the software needs to do and is willing/able to put in the time to help (which no project ever does).
|
# ? May 29, 2017 20:11 |
|
Snak posted:It was very scary how many of my classmates in software engineering thought that it was bullshit and that our professor "didn't know what he was talking about" when we were required to have a design document before coding our project. They really all just wanted to wing it. The problem with this is that most profs want to teach SE and coding at the same time. It's a good idea in theory. But in practice you need to be a fairly good coder before you can think on the scale where SE matters, and too many students I've met have had to do silly things like design documents or (ugh) stepwise refinements of Hello World.
|
# ? May 29, 2017 20:56 |
|
I mean, this was a 4000 level course, so it was basically all juniors and seniors.
|
# ? May 29, 2017 21:01 |
|
Snak posted:If most software is really written by each coder interpreting what they think the customer wants and then implementing it however they see fit without communicating it to any other members of the team working on other parts of the project, that would explain a lot about the current state of software... Well sometimes you have a spec that you mostly just ignore because it's a pile of gibberish.
|
# ? May 29, 2017 21:03 |
|
I think good SE practices are hard to teach because they don't scale downward. Not well. The kinds of programs you can code in a single semester are pretty toy, and you don't really see the benefits of proper SE at a small level. I found the same problems learning Gang of Four OO programming. Until you have problems with more than ~120 objects it just seems like over-complicated architecture.
|
# ? May 29, 2017 21:09 |
|
lifg posted:I think good SE practices are hard to teach because they don't scale downward. Not well. The kinds of programs you can code in a single semester are pretty toy, and you don't really see the benefits of proper SE at a small level. Yeah, really the best way to learn the importance of these is to work on a project of that magnitude and realize how bad your intuitive solutions are.
|
# ? May 29, 2017 21:12 |
|
Absurd Alhazred posted:Yeah, really the best way to learn the importance of these is to work on a project of that magnitude and realize how bad your intuitive solutions are. Would it violate some ethical rules to have students make contributions to a large open-source project of some kind as a finals assignment?
|
# ? May 29, 2017 21:45 |
|
NihilCredo posted:Would it violate some ethical rules to have students make contributions to a large open-source project of some kind as a finals assignment? I don't think so. Here's a paper encouraging this practice.
|
# ? May 29, 2017 21:59 |
|
There were people in my class who would push code to master without checking if it even compiled.
|
# ? May 29, 2017 22:07 |
|
Snak posted:There were people in my class who would push code to master without checking if it even compiled. Should lose a letter grade every time they do that.
|
# ? May 29, 2017 22:22 |
|
Snak posted:There were people in my class who would push code to master without checking if it even compiled. I'm surprised that people in classes (that isn't a git class) managed to push to master.
|
# ? May 29, 2017 22:23 |
|
NihilCredo posted:Would it violate some ethical rules to have students make contributions to a large open-source project of some kind as a finals assignment? Yes, think of the poor open source maintainers.
|
# ? May 29, 2017 23:20 |
|
Snak posted:There were people in my class who would push code to master without checking if it even compiled. If you don't have continuous integration that rejects changes that don't compile, this will happen all the time, even with relatively conscientious developers and strict code review.
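As a minimal illustration of that kind of gate, here's a toy pre-merge check that just byte-compiles every Python file in a tree. The layout and policy are hypothetical; a real CI job would run the full build and test suite, but even this much catches "doesn't even compile" pushes:

```python
import sys
import py_compile
from pathlib import Path

def everything_compiles(root: str) -> bool:
    """Gate check: True only if every .py file under root byte-compiles."""
    ok = True
    for path in sorted(Path(root).rglob("*.py")):
        try:
            py_compile.compile(str(path), doraise=True)
        except py_compile.PyCompileError as err:
            # A real CI job would fail the merge right here.
            print(f"rejected: {err.file}", file=sys.stderr)
            ok = False
    return ok
```

Wired into CI as a required check, the broken push simply never lands on master, regardless of how conscientious the author was feeling that day.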
|
# ? May 30, 2017 00:25 |
|
Snak posted:In this analogy, the bridge's weight limit was clearly posted, but there were so many signs along the side of the road that drivers stopped reading them. BDUF is in general a pretty bad way to run projects. lifg posted:I think good SE practices are hard to teach because they don't scale downward. Not well. The kinds of programs you can code in a single semester are pretty toy, and you don't really see the benefits of proper SE at a small level. If you're on smaller projects, it _is_ over-complicating the architecture.
|
# ? May 30, 2017 00:57 |
|
lifg posted:I think good SE practices are hard to teach because they don't scale downward. Not well. The kinds of programs you can code in a single semester are pretty toy, and you don't really see the benefits of proper SE at a small level. That's why instead of teams of 4 (or whatever) the entire class should be run like a single engineering team building one project with the prof as CTO. Thirty people in 12 sprints allows you to build very much a non-toy project.
|
# ? May 30, 2017 01:24 |
|
Something that can be built by students in a single semester of part-time work isn't exactly a toy, but you still aren't going to hit a lot of the pain-points of real world software development. I really don't think there's any reasonable classroom replacement for a good internship.
|
# ? May 30, 2017 04:50 |
|
Of course there isn't. My point was about what you should learn in a classroom. But I don't really know anything. I went to a state school that could barely be bothered to grade assignments... I only posted initially to say that if coders don't understand the end result they are trying to accomplish, there is no way for them to achieve it on purpose.
|
# ? May 30, 2017 09:45 |
|
Master_Odin posted:Except that in this case, that third-party would be required to review tons of these things and then they'll suffer from fatigue as well and things will slip through. When I was doing some work on a space project, certain commands required dual authorisation. What we discovered was that the operators shared their passwords, so one person could complete the auth without needing someone else. It was also hard for a person to check the command, since all it gave you was a modal popup to auth; they couldn't confirm what the command was without cancelling and then redoing the auth from scratch.
|
# ? May 30, 2017 10:04 |
HardDiskD posted:I'm surprised that people in classes (that isn't a git class) managed to push to master. Agreed. I had a class that ostensibly taught us how to use git alongside some other stuff, and it ended up being everyone just using Dropbox, because nobody understood how to handle a conflict (which came up every other push, since we were all working in the same 2-3 classes).
|
|
# ? May 30, 2017 13:42 |
|
TheresaJayne posted:When i was doing some work on a space project certain commands require dual authorisation. Anything that relies on two people to check each other will be bullshat just the same as if it required one person to check themselves. Laziness always wins out if it can.
|
# ? May 30, 2017 14:24 |
lifg posted:I think good SE practices are hard to teach because they don't scale downward. Not well. The kinds of programs you can code in a single semester are pretty toy, and you don't really see the benefits of proper SE at a small level. This is so true. I've been coding for about 2 years, and my very first semester of college we had a course on requirements analysis, which was all about creating various UML diagrams. The students had no prior coding experience, so none of the diagrams or concepts made any sense to us. The prof droned on and on about inheritance and polymorphic behavior, and showed us seemingly endless amounts of diagrams (2-3 different types every lecture). The students had no context for how any of these concepts could be applied, and as a result it was pretty useless.

A similar thing happened in second semester, when they taught us design patterns and data structures combined into a single one-semester course. Design patterns seemed to just over-complicate everything, and the teacher was very poor at explaining their actual real-world application. One of his favorites was the Singleton pattern, which is pretty widely criticized as an antipattern. I think he just liked it because it's easy to remember how to code one.

A while ago a bunch of design patterns finally clicked for me, and I realized that they are mostly just taking advantage of OOP concepts, especially polymorphism. From a beginner's perspective, it seems like a ton of design patterns are just about using interfaces rather than implementations whenever possible, and keeping things as generic as possible. They seem useful for large applications but could easily be overkill.

I'm still a bit lost as far as the purpose of a factory, especially an abstract factory. I believe it's to help "the client" instantiate objects but abstract away the logic for choosing which type of object to create. Does that sound about right? What's the real-world application? Is it better performance than just instantiating objects? What's the point of always dealing with interfaces rather than specific objects?

Didn't even touch Big O, complexity analysis, or optimization in this program. Any tips on where to start on this topic?
|
|
# ? May 30, 2017 15:23 |
|
Snak posted:There were people in my class who would push code to master without checking if it even compiled. That's good experience for the workplace.
|
# ? May 30, 2017 15:31 |
|
That was the professor's logic, yes. But the end result of everyone's first and often only contact with version control being a frustrating mess they didn't understand was that most people hated it and didn't see any benefit to it. I feel like lessons that showed how a VCS could actually be useful would have been better...
|
# ? May 30, 2017 15:34 |
|
Pollyanna posted:Anything that relies on two people to check each other will be bullshat just the same as if it required one person to check themselves. Laziness always wins out if it can. It's not even necessarily laziness; in fact it could be the opposite. A lazy user might put up with a burdensome procedure, because if it slows down work, that's not his problem, and he's more than happy to sit around waiting for the proper authorisation to roll in. Even if everything burns down in the meanwhile, hey, his rear end is covered. A well-meaning user might actually care about his work and be more worried about not being able to intervene in an emergency. So if a security measure could slow him down and he doesn't think it's truly necessary, he'll try to get it out of the way. The most famous example is probably the U.S. armed forces setting their launch codes to 00000000.
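A toy sketch of what actually enforcing the two-person rule means (a hypothetical design; a real system would tie approvals to authenticated sessions, which is exactly what the shared passwords in the space-project story defeated):

```python
class DualAuthError(Exception):
    pass

class Command:
    def __init__(self, description: str) -> None:
        self.description = description    # shown to both authorisers, so they
        self.approvals: set[str] = set()  # can see *what* they are approving

    def approve(self, user: str) -> None:
        self.approvals.add(user)

    def execute(self) -> str:
        # Require two *distinct* approvers. One person approving twice
        # (or typing a colleague's shared password) must not be enough.
        if len(self.approvals) < 2:
            raise DualAuthError("needs a second, different authoriser")
        return f"executed: {self.description}"
```

The check is trivial; the hard part, as the thread points out, is an identity scheme and a UI that people won't route around.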
|
# ? May 30, 2017 16:16 |
|
Sometimes I'll try to explain to my parents what I do. I never get the feeling that they really get it. My most recent discussion was around just this subject...UI/UX and guarding against human behavior. They're smart people, but I don't think they ever really got it. It kind of gives me an idea of one of the obstacles in creating large systems with multiple parties involved in decision-making. A good fraction of the parties involved just aren't going to get it.
|
# ? May 30, 2017 16:21 |
|
Ornithology posted:This is so true. I've been coding for about 2 years, and my very first semester of college we had a course on requirements analysis Oh my god. I don't know which is more terrifying.

Ornithology posted:A while ago a bunch of design patterns finally clicked for me, and I realized that they are mostly just taking advantage of OOP concepts, especially polymorphism. From a beginner's perspective, it seems like a ton of design patterns are just about using interfaces rather than implementations whenever possible, and keeping things as generic as possible. That's it. It's using the features of objects, like polymorphism, to build systems of objects that adapt easily to changes.

Ornithology posted:Didn't even touch Big O, complexity analysis or optimization in this program. Any tips on where to start on this topic? A better school? Big O is an easy way to measure and compare complexity, and you shouldn't optimize an algorithm until you know its complexity. I don't know where to start learning it, but I'd guess Khan Academy.
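To make the "measure before you optimize" point concrete, here's the classic first example: count the probes a linear scan makes versus binary search. (Toy code; the numbers in the comments are for this exact input.)

```python
def linear_search_steps(items, target):
    """Return (found, probes) for a left-to-right scan: O(n)."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            return True, steps
    return False, steps

def binary_search_steps(items, target):
    """Return (found, probes) on a *sorted* list: O(log n)."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return True, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999)[1])  # prints 1000000
print(binary_search_steps(data, 999_999)[1])  # prints 20
```

A million probes versus twenty on the same data is the whole idea of Big O in one run, and it's why you check the complexity class before micro-optimizing either loop.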
|
# ? May 30, 2017 17:52 |
|
lifg posted:A better school? Big O is an easy way to measure and compare complexity, and you shouldn't optimize an algorithm until you know its complexity. I don't know where to start learning it, but I'd guess Khan Academy. CS50 does a fantastic job of explaining Big O. I'm not sure which specific lecture it's in, but you can probably dig it up with a few minutes of searching on YouTube or edX, and there's also this short related to the class which was helpful. Luckily it's rather straightforward to understand. https://www.youtube.com/watch?v=IM9sHGlYV5A
|
# ? May 30, 2017 18:04 |
|
Ornithology posted:A while ago a bunch of design patterns finally clicked for me, and I realized that they are mostly just taking advantage of OOP concepts, especially polymorphism. From a beginner's perspective, it seems like a ton of design patterns are just about using interfaces rather than implementations whenever possible, and keeping things as generic as possible. They seem useful for large applications but could easily be overkill. I'm still a bit lost as far as the purpose of a factory, especially an abstract factory. I believe it's to help "the client" instantiate objects but abstract away the logic for choosing which type of object to create. Does that sound about right? What's the real world application? Is it better performance than just instantiating objects? What's the point of always dealing with interfaces rather than specific objects? code:
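The kind of thing being described looks roughly like this (a hypothetical Storage example with invented names, not the original snippet):

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The client codes against this interface, never an implementation."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class MemoryStorage(Storage):
    """One interchangeable implementation; a DbStorage could be another."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value
    def load(self, key: str) -> str:
        return self._data[key]

def remember_name(store: Storage) -> str:
    # Works with *any* Storage; this function never names a concrete class.
    store.save("name", "Ornithology")
    return store.load("name")
```

Swap in a different `Storage` subclass and `remember_name` doesn't change at all; that's the payoff of dealing with interfaces rather than specific objects.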
There's still a problem though. You don't have to care about which implementation you're using, but you still have to provide it. That sucks, the application doesn't care about that, that's probably something the user will wanna configure themselves. That's where factories come in: code:
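A self-contained sketch of the factory part (again with invented names). And to answer the performance question from upthread: a factory isn't about speed at all; if anything it adds a sliver of indirection. It's about keeping the "which concrete class?" decision in exactly one place:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

class MemoryStorage(Storage):
    def __init__(self) -> None:
        self.data: dict[str, str] = {}
    def save(self, key: str, value: str) -> None:
        self.data[key] = value

class NullStorage(Storage):
    def save(self, key: str, value: str) -> None:
        pass  # deliberately discards everything

def storage_factory(config: str) -> Storage:
    """The factory owns the 'which implementation?' decision,
    so the rest of the application never has to."""
    if config == "memory":
        return MemoryStorage()
    return NullStorage()

store = storage_factory("memory")  # the client asks for *a* Storage, nothing more
store.save("greeting", "hello")
```

An abstract factory is the same move one level up: an interface for the factory itself, so whole families of related objects can be swapped out together.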
Patterns aren't "just for large applications". They may seem excessive when the code is small, but they're useful every time you don't have (or want) full control over everything (third parties, architectural layers, black boxes, etc). The key decision is deciding what you need to abstract. Everything has trade-offs. Large applications love not thinking about this at all, abusing the hell out of patterns by abstracting everything away to the point it's actually more complex to deal with than the concrete implementations.
|
# ? May 30, 2017 18:26 |
|
The hardest part of implementing design patterns is knowing when to stop.
|
# ? May 30, 2017 19:07 |
|
Not a horror but made me laugh. code:
|
# ? May 30, 2017 19:09 |
|
Working with an outside dev team for a small Marketing site where they're handling the site design and WordPress (yes, I loving know) theme customization. We ran that site for the longest time on a single server with no dev or test environment. It wasn't mission critical, we're a small company, and most of our leads come from another source. Eventually we decided to try on our big boy pants and move to a server host that doesn't suck balls and have random outages several times a year. I set up a load-balanced configuration with failover for the site and backend DB, plus a dev/test server, then told the web devs they'd have to make their changes in dev and let me know when stuff was ready to deploy to production, which is, of course, the bare minimum everyone should be doing anyway. (I wasn't involved in the initial server setup at the old host.)

These jackasses will not stop loving whining about not having direct file system access to our production servers. Maybe if they'd never made a mistake with their changes I could see why some inexperienced devs would think it's unnecessary red tape, but I too frequently have to make them fix their poo poo because they make changes without even minimal testing/checking of the results. The most glaring one recently was a search button they needed to fix; their style changes hosed up the position and padding in an obvious way. Their excuse was that it was an IE problem. Nope, every browser looked the same, and that was the only thing you were working on for the fix. Come on.

I've also had to go through and redo their work for really basic poo poo. One of their custom themes tracks some stats in a custom DB table, and their code tries to recreate that table every single time the page is loaded. It also doesn't check whether the URL params are null before trying to insert a row of all null values (into non-nullable columns). WordPress, being as awesome as it is, was hiding the exceptions and things "worked". I only noticed this by chance while looking at other WordPress crimes against computing, trying to figure out some performance issues.

Occasionally we ask them to tweak something or fix an issue, and instead of doing that on the dev server, testing it, and letting me know, they tell me to edit file xyz and change line # to such and such. Just no. You tell me when you've done it the right way; we're paying you so I don't have to screw with PHP or WordPress, and the time you spent typing up the instructions and emailing me was probably longer than it would've taken to just do it right. There's also almost a 100% chance they're not using any source control at all. Still don't understand the whining about no prod access when they're not working with the prod site or prod content at all. Why is it such an ordeal to email me when it's time to push to prod? JFC.

I'm going to try to convince the owner (who selected a tiny local web dev company) to switch, but then I'll end up having to find their replacement, and that will suck. I'm not an admin, but I get to handle all the server poo poo because we've been too small to have a full-time admin so far, though I think I'm gonna start asking for one. Rackspace does a decent job with the extreme basics of managing the servers, but they're hands off when it comes to the databases with our current plan. Getting a fully managed database through them costs about half as much as hiring a Linux admin with some MSSQL admin experience full time, so I'm going to push for that. The company is at that awkward point where random server and general IT issues take up a significant portion of my time, keeping me from staying focused on dev, but it's still not enough to keep someone busy full time. Hoping we can find a good admin wanting to get into dev.

...and that's what I would do if I were the King of all the Universe!
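For reference, the fix for that particular stats-table crime is tiny; sketched here in Python/SQLite rather than their PHP/MySQL, with an invented schema:

```python
import sqlite3
from typing import Optional

def record_hit(conn: sqlite3.Connection, page: Optional[str], referrer: Optional[str]) -> bool:
    """Idempotent setup plus input validation, instead of recreating the
    table and inserting NULLs on every single page load."""
    # IF NOT EXISTS makes the setup safe to run repeatedly (or better,
    # run it once at install time instead of per request).
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stats ("
        " page TEXT NOT NULL,"
        " referrer TEXT NOT NULL)"
    )
    if page is None or referrer is None:
        return False  # don't even try to insert a row of NULLs
    conn.execute("INSERT INTO stats (page, referrer) VALUES (?, ?)", (page, referrer))
    return True
```

Two guards, and the hidden per-request exceptions (and the needless DDL) go away.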
|
# ? May 31, 2017 02:07 |
|
Plorkyeran posted:Something that can be built by students in a single semester of part-time work isn't exactly a toy, but you still aren't going to hit a lot of the pain-points of real world software development. Another issue that I ran into with my SE class (which I had taken after a summer internship): trying to get everyone in the group together to talk about things was like pulling teeth. No one was ever available at the same time. At least in a workplace, everyone is (at least theoretically) paid to work together to make the product, so the incentive structures are different from a classroom's. Sure, there are analogues for that in workplaces, but those are a lot easier to deal with than figuring out how to keep your group from making you fail SE.

SupSuper posted:Patterns aren't "just for large applications". They may seem excessive when the code is small, but they're useful everytime you don't have (or want) full control over everything (third-parties, architectural layers, black boxes, etc). The key decision is deciding what you need to abstract. Everything has trade-offs. Large applications love not thinking about this at all, abusing the hell out of patterns by just abstracting everything away to the point it's actually more complex to deal with than concrete implementations. Abstracting out parts of your code from one another gives you flexibility if you define good APIs as part of the abstraction. More often than not, when I see abstractions make things more complex, it's because the API wasn't well-defined and/or it wasn't a full abstraction, so all sorts of concrete details end up crossing the barrier, and now you've just added more detours to what's already a spaghetti mess of code. But yes, you can also take it too far. If you can't easily trace what your code is doing, then you failed, and some of this falls in the category of "code organization".
|
# ? May 31, 2017 05:06 |
|
Spatial posted:Not a horror but made me laugh.
|
# ? May 31, 2017 11:35 |
code:
|
|
# ? May 31, 2017 14:58 |
|
Quite a lot. There's an unbelievable amount of poo poo in that header.
|
# ? May 31, 2017 14:58 |
|
Spatial posted:Not a horror but made me laugh. Found the real horror.
|
# ? May 31, 2017 15:01 |
|
|
ratbert90 posted:Found the real horror. what's wrong with that?
|
# ? May 31, 2017 16:56 |