|
Hello thread! Lots of fun stuff I missed out on in the last couple dozen pages, but I've arrived at a good time.

My Rhythmic Crotch posted: he's done 3 really good things that no other manager here has:

And on that note, I can share a waterfall example: I planned to read the entire thread before posting. I did and am now posting. Success. Buuut don't get too bent out of shape; I'll be joining in lamenting scrumfail, pointless points, inconsistent Fibonacciae, analysis paralysis, refactoring hell, and eternal retrospectives of the unchanging hive mind.
|
# ¿ May 21, 2019 03:01 |
|
Protocol7 posted:I got a recruiter email with the subject "Alexa, find me a Jedi Master level software engineer".
|
# ¿ May 23, 2019 04:18 |
|
I agree with those two categories basically, but want to emphasize that only one of them will be commonly capable of software architecture and development. In my experience all machine learning scientists will be unable to devise methods to perform any of the support work necessary to build a machine learning system, namely: how to get data into the system, how to schedule retraining, how to actively update the data used for retraining, how to roll back when a retraining fails, how to provide revision control and configuration of models, how to monitor the behavior of the model, how to scale the model for production use, how to release the model beyond their desktop, how to deploy it through a pipeline, or how to provide integration and regression testing for it before releasing it into a production environment.

I have also seen a fair number of machine learning scientists who can provide no business interpretation for the results, so not only will you need a team of five developers to build the infrastructure, you will also need several analysts familiar with your data set to define the inputs and outputs that the machine learning scientist can use.

In all honesty, I have tried over the years to keep my eyes open for benefits to ML, but it's very difficult to get a good picture of when it is needed versus when it just happens to work for a particular problem that could be solved more simply and directly. There are certainly a few systems that benefit from it, but in many cases a simple statistical model will be sufficient to move your business forward. In my opinion, anyway.
|
# ¿ May 26, 2019 01:21 |
|
Hollow Talk posted: I feel that this overlapping of skills is the really hard problem, because otherwise, ML-people build models that cannot be run, engineers build systems without purpose, and analysts build analyses without technical merits.

ML is popular because you can get results almost immediately and it doesn't matter if they're wrong. People don't understand large systems, so they don't have a baseline with which to judge ML programs. Nevertheless they'll sink a few years and a team of people into it if they get 0.01% improvements and "predictive results". Good luck getting them to agree to even four weeks of focused math research, not that they realize when that is needed. It truly is a perfect example of agilefail.

Disclaimer: As before, some ML works. The only examples I've seen tend to be in image analysis, where writing a representation by hand would kinda suck. Still, I wonder if a simple color histogram would correctly find good strawberries.
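For what it's worth, that histogram baseline is about ten lines. Here's a toy sketch of the idea; the pixel values, the red-bucket thresholds, and the 50% cutoff are all made up for illustration, not a real detector:

```python
def red_fraction(pixels):
    """Fraction of pixels in a crude 'ripe red' bucket; pixels are (r, g, b) in 0..255."""
    red = sum(1 for r, g, b in pixels if r > 150 and g < 100 and b < 100)
    return red / len(pixels)

def looks_ripe(pixels, threshold=0.5):
    # Call it a good strawberry when at least half the pixels are red-ish.
    return red_fraction(pixels) >= threshold

# Fake 10-pixel "images": mostly red vs. mostly green.
ripe = [(200, 30, 40)] * 8 + [(40, 160, 60)] * 2
unripe = [(90, 170, 80)] * 9 + [(200, 30, 40)]
print(looks_ripe(ripe), looks_ripe(unripe))   # True False
```

If an ML model can't beat this kind of baseline, the few years and team of people were probably wasted.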
|
# ¿ May 27, 2019 02:47 |
|
bob dobbs is dead posted: Goog and bing search

I find this first chunk interesting, along with some of the other posts, but I'll start here. Internet search didn't start with machine learning, and when I search for coffee I get indexed results from the pool that answers (which isn't Google anymore because they just screwed up their site). The ML is only on the classification and ranking side, right? That's an improvement over full-text indexing, for certain, because of the nature of language. So linguistics, voice recognition, that stuff I believe.

I think most quality recognition for fruit and such is a matter of convenience only, and I don't believe it's notably better than deterministic algorithms. The "black box" nature of most ML projects makes it de facto irrefutable. That doesn't make it wrong, just unverifiable. Yes, there are silly examples (cancer caused by medical rulers, ref?), but swipe gets it wrong all the time and increasingly so does (did) Google. When I find myself doing the (-b -c -d garrr I'll just use full Boolean!) game, all their work has failed... drastically. Shrug.

This much I know, not speaking about anyone here, but there are some posters in SA who can only be understood with ML, if even then.
|
# ¿ May 29, 2019 00:54 |
|
This is getting into awfully technical considerations that likely belong elsewhere. I'll admit to being biased against most "machine learning" (actual or otherwise), but also admit to knowing of a handful of large successes that I'd prefer to continue to use. Maybe when I start going through my backlog of agilefails I'll have a few ML stories to share.
|
# ¿ May 29, 2019 23:02 |
|
shrike82 posted: Does it really matter whether a model is strictly speaking AI, ML, or DL-based?

PageRank isn't building a "mathematical model based on sample data, known as 'training data', in order to make predictions or decisions without being explicitly programmed to perform the task". It's a designed, deterministic algorithm that's explicitly programmed to do that one task. Words in this industry are a loving game. Try telling your VP you're going to "build a statistical model to measure and predict X" and you're going to get one reaction. Tell them "we're doing it with machine learning" and you'll see quite another. See also: Buying the gnome. https://www.slidebelts.com/blogs/blog/buying-the-gnome

PhantomOfTheCopier fucked around with this message at 16:17 on May 30, 2019 |
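To make the point concrete, this is roughly all PageRank is: a fixed-point iteration you could write in an afternoon, with no training data anywhere. The three-page link graph is invented for illustration:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p in pages:
            out = links[p]
            if not out:                      # dangling page: spread rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in out:
                    new[q] += damping * rank[p] / len(out)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
# Ranks sum to 1; "c" scores highest because both "a" and "b" link to it.
```

Deterministic, explicitly programmed, same answer every run. Whether the VP hears "statistics" or "machine learning" changes the budget, not the math.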
# ¿ May 30, 2019 15:00 |
|
"A required system upgrade will start in ten minutes. Please close all open work and... well just go home for the day, this'll take a while".
|
# ¿ Jun 4, 2019 00:41 |
|
Agile is not an excuse to be lazy. You wrote the few paragraphs of an idea into the "story". You were in the room with the active conversations about difficulties uncovered. You heard about our four weeks of ongoing investigation. Why wait until week eight of twelve to tell us there's an existing library to bridge between the old and new systems?
|
# ¿ Jun 5, 2019 15:22 |
|
Technical interview content comes up a lot where I work. For more junior developers, some will use standard programming puzzles, which provide some provisional evidence that the candidate learned anything in college. Someone with additional practice could fail on follow-up when the interviewer extends the discussion, and all their practice is likely to be a hindrance: those candidates typically try to adapt the stated problem to something they've done instead of solving the actual problem.

Beyond entry level, however, I've rarely seen puzzles. Most are real-world scenarios that require interpretation, evaluation, and investigation (through Q&A) before proposing and developing the solution. Granted, most are cast into a 30min window, and interviewers can give hints to move candidates forward or help them focus. Practice with puzzles might make someone more confident, but again they may get stuck trying to apply a known solution to the wrong problem. One of the reasons these approaches haven't been replaced is that they are merely tools to collect more useful information: How do you solve problems? How do you communicate? Are you willing to share ideas, accept feedback, and admit mistakes? And, yes, for a programmer, can you turn those ideas of yours into some code?

Another interesting thing with the scenarios I use, though I haven't really run the numbers, is that candidates fall pretty well within the lines. Maybe I'll gather the data Monday, but it mostly seems like people either can't do anything, can deliver a simple brute force, can generalize to an optimised solution possibly assisted by a language feature, can "build it as a service", or can "build it as a product". I don't recall much blurring in performance. The number of applicants that can't model and/or solve problems with code is rather astounding.
|
# ¿ Jun 8, 2019 00:06 |
|
That's not very agile. I'd never submit myself to that type of torture, particularly since it suggests the company is so unprepared to conduct interviews that they can't even summarize their problem domains into a few 45min scenarios. Meanwhile, given the learning time required, I wouldn't expect much from one day; you might as well just post everything as contract-to-hire.

I admit that lots of companies are crawling up to the 5hr--6hr mark, or basically the full day with lunch. Honestly, that's getting to be too much, because I rarely see disagreements that require all those interviewers' feedback. I mean, do four coding problems give you much more than three? I think it's of greater benefit to extend the time per slot and use more general questions, so you can get a sense for how the candidate develops and modifies their approach when more information is added.

My worst was the 6hr interview that became nine and a half. I should have walked out.
|
# ¿ Jun 10, 2019 23:30 |
|
Blinkz0rz posted: I hope you never get hired

Comradephate posted: Re: contrived interview problems vs. real-life problems:

Not sure what you're trying to say here. A "real world" problem is one where you take a scenario and throw out all the pieces that are meaningless in an interview, modify one or two things to suit constraints, and you're ready. A contrived problem is one where you say, "wtf, you just made this up so you'd have something to ask me?"

People with "real world experience" in a system can flub if they try to "over-apply" their experience instead of discussing the scenario in the interview. Candidates without that experience have to apply the same basic solving skills. The scenario doesn't give an advantage beyond that naturally afforded to more experienced candidates. On the other hand, a contrived problem may be tackled excitedly by a candidate lacking "real world experience" because they've been riding leetcode. The person with the experience is likely to leave, laugh, or later observe, "Why the gently caress would I want to work there if those are the problems they need me to solve?"

It's easy to simplify scenarios if candidates are stuck. It's harder to get good info if you spend your hour scaling up from "build an array" to "build the cloud".
|
# ¿ Jun 11, 2019 22:45 |
|
That's every CI story I've ever heard: You need a team continuously implementing CI tooling to achieve something like CI, but then an event will make you realize you need testing for that tooling, so you need CI for your CI... It's basically Zeno's Paradox for both CI and CD. Startup or corporation, I've never seen it succeed. And when you get really close, management will come in and tell you they want written, manual approval on steps to "avoid an outage".
|
# ¿ Jun 12, 2019 00:48 |
|
shrike82 posted: Is it common for you guys to encounter hires that passed the technical interview process and turned out to be problematic from a technical proficiency standpoint? I personally can't think of many cases like that - most problem hires tend to be duds from a cultural fit, soft skills standpoint.

And we wonder why junior engineers always want to replace things with a new hotness.

Edit: Phone post, fill in the gaps and don't make silly assumptions.

PhantomOfTheCopier fucked around with this message at 23:30 on Jun 13, 2019 |
# ¿ Jun 13, 2019 23:27 |
|
Given how many things break with code reviews, I can't imagine having customers at all (that would stick around for the hourly outages) without them.
|
# ¿ Jun 13, 2019 23:41 |
|
While there are a few good points in that document, I find it ever amusing that each new project establishes the same fundamental failures: "We wanted to be fast, we learned we hosed it all up, we made some choices about replacements versus redesigns versus repairs, please help us try to ingrain these arbitrary guidelines that we just made up". Along the way they routinely (excessively) invoke "as long as it's good enough!!".

What I see as the biggest philosophical failure of agile software development is not the "good enough" or "faster than fast!!" desires, but the blatant opposition to "more than just 'good'" and "fast enough". None of these documents start with "We decided to start with established practices from the most successful software projects, thus differentiate between proofs-of-concept and prototypes and production code, and most value building something that handles 99% accurately and correctly because we don't want to lose customers and peers while they wait around endlessly for us to fix crap libraries, and spending the extra 10-20% on that is worth the time (because it actually takes 3x to get to the same point if we try to take shortcuts)."

Of course, any development methodology can support whatever time/cost/resource balance is desirable, but what people actually follow will be the laziest parts of the method.
|
# ¿ Jun 16, 2019 15:12 |
|
I ride the bus and the number of people in the morning that smell like sweat strongly suggests an over-emphasis on saving water or something.
|
# ¿ Jun 19, 2019 23:22 |
|
Starbucks coffee tastes burnt. Is it any wonder that so many of their consumers have bad coffee breath?
|
# ¿ Jun 20, 2019 23:13 |
|
But that's not enough. We need a Linus versus esr page. Namely, "Is Mrs/Mr Doe Agile, agile, or not?" ... versus Stallman versus Royce versus de Raadt vs Gates...
|
# ¿ Jun 21, 2019 22:51 |
|
And the other problem with agile methodologies rears its ugly head: We shall assume that tasks are performed by interchangeable clods with equivalent institutional knowledge. Too bad that risk planning is "planning" and that's not agile. Really looking forward to the review of this project after it completes. I really hope they're willing to look at how things were ultimately delivered.
|
# ¿ Jun 24, 2019 23:16 |
|
Unfortunately, I'm afraid you all missed it. Points or hours are irrelevant. The false assumption is that engineers are interchangeable during the task, and since there's no plan and no risk plan, things fall apart as a result of the exchange. A 5pt task that has undergone "1pt of work" returns to a 5pt task when the engineer is flopped out for a "higher priority". A 5hr task likewise. By rapidly exchanging people you could probably arrive at Zeno's Paradox of Productivity, that "Assigning everyone to everything yields zero velocity".
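The claim is easy to model. A toy simulation (all numbers invented; ramp_up_loss=1.0 is the pessimistic "progress fully reverts on handoff" case described above):

```python
def points_delivered(task_size, work_between_swaps, total_capacity, ramp_up_loss):
    """Simulate one task being repeatedly reassigned.

    ramp_up_loss: fraction of accumulated progress lost at each handoff
    (1.0 = full context reset, the worst case).
    """
    done = 0.0
    spent = 0.0
    while spent < total_capacity and done < task_size:
        chunk = min(work_between_swaps, total_capacity - spent, task_size - done)
        done += chunk
        spent += chunk
        if done < task_size:            # a "higher priority" arrives: swap engineer
            done *= (1.0 - ramp_up_loss)
    return done

# Stable ownership: the 5pt task finishes well inside a 10pt budget.
print(points_delivered(5, 5, 10, 1.0))   # 5.0
# Swapped every point with full context loss: zero net progress, ever.
print(points_delivered(5, 1, 10, 1.0))   # 0.0
```

Even with only 50% loss per handoff, the swap-every-point schedule converges on one point of progress and never finishes: Zeno's Paradox of Productivity in four lines of arithmetic.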
|
# ¿ Jun 25, 2019 21:19 |
|
Process doesn't exist without people, and process induces certain behavioral patterns that create these issues. I'm blaming the process, not necessarily for the initial issues created by the people, but for taking a general approach that also ignores the "safety mechanisms" that would prevent those issues from snowballing into catastrophes.
|
# ¿ Jun 25, 2019 21:35 |
|
Humans aren't resources, they're property. In this grand, new millennium it's called "marriage" for an individual and "employment" for large groups. Now that (bit of slight sarcasm) honestly makes me wonder what classic slavery would have looked like with agile methodologies.
|
# ¿ Jun 27, 2019 11:52 |
|
Pollyanna posted:Don’t even get me started on modern dev practices or doing anything more complicated than adding a field somewhere. Any PRs larger than 20 LOCs of changes are avoided like the plague. Next time, "No it's impossible to change this incrementally, here's 2000 lines of consumer changes".
|
# ¿ Jul 1, 2019 20:51 |
|
You're basically screwed anyway because MySQL cannot roll back the schema changes you make (DDL causes an implicit commit). It's therefore impossible to release a fully tested change onto the production database (and that's ignoring replication issues). You can try to fake it against staging systems, but given the potential for data drift you can't be certain of the outcome.

Your workflow should be: durr idea, prototyping, develop, code review. Release is: begin, savepoint, apply inserts/updates/deletes, a loop of savepoint + integration test + rollback, then either a (full) abort or a commit of just the savepoint, followed by the next change in the release.

Lots of ORM was built because people needed a software workaround for a lack of database functionality. But it's 2019, you have better choices!
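Here's a sketch of that release loop, using Python's stdlib sqlite3 as a stand-in for a database with honest transactional semantics. The table, the changes, and the "integration test" are all invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage txns by hand
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")

conn.execute("BEGIN")
for delta in (-250, -40):                       # two changes in the release
    conn.execute("SAVEPOINT step")
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 1",
                 (delta,))
    # "Integration test" against the changed data: no account may go negative.
    (balance,) = conn.execute(
        "SELECT balance FROM accounts WHERE id = 1").fetchone()
    if balance < 0:
        conn.execute("ROLLBACK TO step")        # undo just this change
    conn.execute("RELEASE step")                # then move to the next change
conn.execute("COMMIT")                          # or ROLLBACK for a full abort

print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone())
# (60,) -- the -250 change was rejected, the -40 change landed
```

Every change is tested against the real data it will land on, and nothing is visible until the final COMMIT; no ORM gymnastics required.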
|
# ¿ Jul 1, 2019 23:17 |
|
Hollow Talk posted:Quick reminder that Postgres lets you use DDLs in transactions, so you can roll those back (except sequences).
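SQLite happens to share that property, so the behavior is easy to demonstrate with nothing but the Python stdlib (the table name is made up; MySQL would instead implicitly commit at the CREATE TABLE):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # explicit txn control
conn.execute("BEGIN")
conn.execute("CREATE TABLE scratch (x INTEGER)")   # DDL inside the transaction
conn.execute("ROLLBACK")                           # ...and it rolls back cleanly

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)   # [] -- the table never existed
```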
|
# ¿ Jul 2, 2019 14:40 |
|
Xarn posted:I originally made that post as a response to your normal day job
|
# ¿ Jul 2, 2019 16:25 |
|
"We need team players who are fungible and invested in our goals of agile product development. Priorities have shifted after a recent meeting with customers, so (oldwork) has been put on hold until next year. It would be great if you could help out with (newwork) or spend some time bringing that team up to speed."

Sorry, I guess this is kinda bordering on a corporate-thread post, but as a matter of fact I'm on a team that just used kanban to do this, namely setting everything to priority zero unless it's in the special list. This is neither agile, because it's not able to address incoming requests, nor good waterfall, because it refuses to acknowledge known limitations that must be addressed incrementally over the next 18mo. No :snip: yet. They have until about August.
|
# ¿ Jul 2, 2019 17:11 |
|
smackfu posted:“It doesn’t matter what work you do, all that matters is that you finish the work we committed you to.”
|
# ¿ Jul 11, 2019 00:22 |
|
I've felt old and cranky for... ever? I started as a college instructor, and even students older than me were flippant, ungrounded, unstable messes. I had better personal finances than them, more control over my life and pursuit of happiness (not that these things were perfect, nor free from depression, but I had coping mechanisms). Aaaand my masters is in mathematical logic, so I guess some similarity to Aristotle is unavoidable.

Honestly, I see a lot of interns that are relatively mature. They listen and try to learn. I've rarely seen them argue uncontrollably for ridiculous new hotness "just because"; at least they typically try to give reasons when proposing a new technology. Where they fail is on scheduling, organization, any sense of estimation, and problem solving. Most of this seems to be cultural (my culture, in fact), which permits skating through college without much effort.
|
# ¿ Jul 17, 2019 15:18 |
|
BaronVonVaderham posted: wfh...

Myself, I hate working from home. I'm not set up for it and we have to use the work laptop. I'm not bothering with reconnecting keyboards and monitors, so it's not ergonomic. I have a desktop at work for a reason. (I'm healthier at work: better/extant AC, I drink more water, I take more frequent breaks, etc.)
|
# ¿ Jul 17, 2019 22:38 |
|
Remind them that this attitude leads to escalation fairly quickly. If they want sprint commitments to be completed in all cases, they will soon find that all tasks in the sprint are relatively small pieces of larger stories, or research and investigation tasks. If they want to reprioritize items during the sprint, then they must choose items of equal value to remove. ("Completion percentage" is just another metric, and we know what happens when you chase metrics.)

Choice 1: Why are all these tasks "Determine method to do X"? (Answer: Because there's a 70% chance you'll steal 40% of our time, and this gives us sufficient buffer to still reach completion for a slightly relaxed definition of "done".)

Choice 2: I want this prioritized, how many points? Seven? And we're halfway through a 15pt/person sprint, so Johnny, I'm going to need you to drop Eggbox and we'll have to restart it next time. Please take the remaining time on Eggbox and add 1pt, since we're wasting time on "task switching" by punting.

Sadly this requires more than one person not
|
# ¿ Jul 23, 2019 00:39 |
|
You should write a modern, angry version called The Little Princess. Your manager plays the role of Queen of the Universe. "I command the universe and it doesn't listen!"

Our thing right now is making the switch to kanban, people thinking it's still scrum, meanwhile pushing to get rid of the ~3hr/3wk scrum meetings, and wanting to do it all today. I'm not running either scrum or kanban here, but it sure seems like we need the meetings to discuss priority and to deal with the backlog grooming. Moreover, we still need a retro to review process issues, and we should probably try it for a few cycles before we make radical changes. In any case, we have the rooms reserved; no need to cancel those meetings yet.

Where we're wasting time is sitting in a room to discuss "priority": instead of discussing the goals (or "stories" if you must), we spend 80% of the time listening to a few people argue over the complexity, related problems, effort, or solutions. When each task takes 5min to prioritize, we're not going to get through our 500-item backlog now, are we?

(Sorry, I didn't mean to be so analytical up there. Your manager is dumm and should feel bad and we should make fun of her.)
|
# ¿ Jul 23, 2019 03:39 |
|
necrobobsledder posted: Packet fragmentation...

Keeping queries under 1400 bytes is going to be tough for most programmers. They probably don't even know what a query is, since they've been stuffing everything into an ORM library their entire career. Reminds me of the customer who claimed networking problems and started collecting tons of metrics so they could get our engineers to waste weeks: they were convinced the troubles were caused by TCP retransmissions.
|
# ¿ Jul 24, 2019 22:48 |
|
You missed the correct answer. Check out a new copy of the repository and use `diff -rN` and/or vimdiff to compare the repo with the target code. Basically do git's loving job for it, then push normally.
|
# ¿ Jul 25, 2019 23:11 |
|
leper khan posted: The thing that made my life easier is having my IDE blame every file. Any cache written by one of those people I assume has a bug in it. I've had no false positives.

It sounds like you're not taking your job responsibilities seriously. You see, "we have come to value: Individuals and interactions over processes and tools". We don't value negativity like 'git blame' and metrics that track who introduced bugs; those are processes. We want you to interact with those individuals to guide them to build better software. They are providing you with an opportunity to become a better developer on the team, so they deserve a promotion. When you are ready to be more agile, you'll be ready to mentor others in the future. I'm sure through collaboration we can respond to our changing needs and build better software together. We invite you to move ahead with us.

Also
|
# ¿ Jul 26, 2019 23:34 |
|
Aaronicon posted: Welp, I'm now SAFe Agile Certified (lol no I didn't pay for it) mostly so I can continue arguing about it at work without the robots deflecting with 'well you just don't understand it'.

s/SAFe/Agile/ if you don't recognize the boilerplate excuses.
|
# ¿ Aug 1, 2019 00:37 |
|
Protocol7 posted:There's probably some witty metaphor about it resembling a city's public transit map that I just can't quite reach (because I am an idiot, and even then, the diagram has broken my brain).
|
# ¿ Aug 7, 2019 22:54 |
|
Teams also decide to change. The team I'm on isn't really working on the things that were planned and discussed with me last year before I joined. I'm going to work on a few interesting things here in the next month or two, but after that I don't know if there will be much left. They're supposed to have an 18-month plan ready soon, which will at least establish 6mo of projects and/or focus, so I'm hoping to know by the end of this month whether I'm working within my current team or talking to managers about doing other things inside this org... or talking to entirely different groups within or without the company.

Lately I've been thinking that effective software development teams would be nothing more than consultants with short-term individual ownership of delivery. Small modification or reconfiguration would be permitted, but there would be no endless refactoring and rebuilding. Code that doesn't solve a problem would simply be thrown out as unmaintainable, inextensible, etc. Any code that isn't modular enough would be dropped as "poor design". Everything in production use would be "this should do one thing well and here's the interface". Of course there are a handful of things that benefit from optimized, compiled, tightly coupled components, but it seems like most big balls of mud are too tightly coupled and don't provide those benefits (nor need to in most applications), instead just serving to over-complicate support and maintenance of the product.
|
# ¿ Aug 12, 2019 03:15 |
|
Vulture Culture posted: There's no objective valuation of what "one thing well" even means as a cultural value.

Why should the bulk of software design be based on subjective cultural values? Priority of requests can certainly be established via such subjective or statistical measures, but functionality is a matter for objectivity. You're not getting code approved that just doesn't work, independent of what you might claim as your cultural value.

quote: "Designs" are accumulations of point-in-time tradeoffs that hopefully amount to something comprehensible.

quote: Doing one thing well implies doing everything else poorly

One of us hasn't passed this way before. I'm not sure which way or where, but I'm certain we've arrived at different places.
|
# ¿ Aug 12, 2019 23:22 |