Jabor
Jul 16, 2010

#1 Loser at SpaceChem
The projects we have interns work on are generally pie-in-the-sky "wouldn't it be nice if..." things that we'd kind of abstractly like but can't really justify putting engineers on. Most of the time it doesn't really go anywhere after the intern moves on, but sometimes it does. 8 weeks is a bit of a short time for an actual project though, especially when you consider that a couple of those weeks are going to be just ramping up on basic "how to do work" stuff.

Interns aren't there to grind tickets; don't get them to do that.

ExcessBLarg!
Sep 1, 2001

Motronic posted:

Interns take more of your team's time than they give. That's the entire idea.
Internships are definitely primarily about enriching education through work experience, but if you have an ambitious intern working on a well-scoped project it's not unreasonable to expect them to produce something of value.

You just can't depend on an internship to produce something useful, and even if it doesn't, that isn't necessarily a poor reflection of the intern.

asur
Dec 28, 2012
Give them a single project that is reasonably well defined, that you have a design for or know pretty well (because you're going to need to help them), and give them a long time frame to design and then implement it. With the 8-week timeframe given, I'd try to find a project that you think should take 4 weeks, and give them 2-3 weeks for design and 5 weeks for implementation and deployment. If they need other people, which they should at minimum for review, then tell them to set up the meetings based on how your team works. If they haven't been on a team before, then this step may require more help.

Highly recommend that you set up check-ins with an intern at least daily at first, if not twice a day. Once you get a feel for how independent they are and how likely they are to ask questions, this can change, but most interns, and new grads as well, will churn for way too long on problems because they think people will be annoyed if they ask their questions.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

asur posted:

Give them a single project that is reasonably well defined, that you have a design for or know pretty well (because you're going to need to help them), and give them a long time frame to design and then implement it. With the 8-week timeframe given, I'd try to find a project that you think should take 4 weeks, and give them 2-3 weeks for design and 5 weeks for implementation and deployment. If they need other people, which they should at minimum for review, then tell them to set up the meetings based on how your team works. If they haven't been on a team before, then this step may require more help.

Highly recommend that you set up check-ins with an intern at least daily at first, if not twice a day. Once you get a feel for how independent they are and how likely they are to ask questions, this can change, but most interns, and new grads as well, will churn for way too long on problems because they think people will be annoyed if they ask their questions.

Interns and new grads have issues transitioning to an environment where they don't need to do all of their own work and where there aren't easy answers to all of their problems. For some of them, it's the first such environment they've been in.

This is less a problem for PhD interns for hopefully obvious reasons. Though my current job doesn't target PhD candidates enough to consistently have those.

I've seen interns used to clear the backlog, and I've seen them used to work on speculative projects. In general, they're capable of both to some degree. They get more out of the projects than the backlog (a project is something scoped for them to complete, and something they can talk about in detail in future interviews). There's also a reason that stuff has been rotting in the backlog for so long. The only problem with the projects is that you need a pile of them prepped ahead of time.

Hadlock
Nov 9, 2004

We've had a 100% success rate getting interns to create a Slack bot to do X, where X is letting non-technical project managers perform an engineering task (like spinning up a new environment, instead of having an engineer run a script for it manually) or check in on a long-running task like crunching marketing numbers, e.g.

@marketingbot what's the ETA for today's numbers?

Marketingbot: average for last 7 days is 10:52am, today is projected to be 11:07am, trending slightly upwards since last weekend

Or whatever. If they finish that early, then yeah, have them pair-program to grind out tickets, but at least they can put that project on their resume.
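For what it's worth, the projection half of that bot is just a few lines of arithmetic. Here's a minimal sketch of what such a bot might compute; the finish times and the trend formula are invented for illustration, not taken from any real marketingbot:

```python
from statistics import mean

# Hypothetical finish times for the marketing-numbers job over the last
# 7 days, as minutes after midnight (e.g. 652 -> 10:52am).
finish_minutes = [652, 655, 649, 660, 658, 663, 667]

def fmt(minutes):
    """Render minutes-after-midnight as an h:mm clock string."""
    return f"{int(minutes) // 60}:{int(minutes) % 60:02d}am"

avg = mean(finish_minutes)
# Crude "trending upwards" projection: push today's estimate past the
# most recent finish by half its gap from the weekly average.
projected = finish_minutes[-1] + (finish_minutes[-1] - avg) / 2

print(f"average for last 7 days is {fmt(avg)}, "
      f"today is projected to be {fmt(projected)}")
```

The real intern-sized work in these bots is usually the Slack plumbing and permissions, not the math.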

Also, plan on your intern being hung over most mornings; don't schedule meetings with them before 11am.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
isn't that just programmers in general? at least 10am maybe

Xarn
Jun 26, 2015
If you don't expect your intern to get anything done, why waste the money in hiring them?

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Xarn posted:

If you don't expect your intern to get anything done, why waste the money in hiring them?

Part of the new grad recruitment funnel. You get a clear signal on which ones are promising.

ExcessBLarg!
Sep 1, 2001

Xarn posted:

If you don't expect your intern to get anything done, why waste the money in hiring them?
Much of it is just being engaged in the local technical scene: having a pipeline for fresh graduates, getting current students to spread their (hopefully good) experience to others word-of-mouth, etc.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Having an intern work on a pie-in-the-sky project vs knocking out easy tickets is a false dichotomy. I've had interns go through both and what ultimately brought them the most joy was seeing stuff they did get consumed and used, and witnessing the secondary effects and reactions to that. You can break off a part of a crazy project or pick tickets that would have that effect either way.

luchadornado
Oct 7, 2004

A boombox is not a toy!

Interns are a set of fresh, untainted eyes so their questions can be really good at challenging norms that perhaps shouldn't be norms. I also like the reminders to be patient and the challenge of explaining things to someone with a considerably different experience base.

They're absolutely an investment though and can feel like a drain, especially if your team is already struggling in other areas.

Motronic
Nov 6, 2009

luchadornado posted:

They're absolutely an investment though and can feel like a drain, especially if your team is already struggling in other areas.

I have had to turn down interns many years when I had understrength teams. Sometimes I was told "we know you need people, so we're offering you interns," and then I have to explain (sometimes diplomatically, sometimes not) that they don't know what the hell they're talking about and shouldn't be in charge of interns if they think interns are just cheap or free labor. And I also often remind them how long it takes new hires to get up to speed and productive, which often averages somewhere around "the entire length of time this intern will be here."

Xarn
Jun 26, 2015
IMO if you have an intern, you should align the tasks you give them with what interests them, to get the best (or, any) results. You are already paying them a lot of money, and investing the team's time, so increasing their chances at success is the best way to get the money back.

This usually means that you won't give them tickets to grind out (unless you are like us and just toss all bugs/ideas/wishlist items/etc into the issue tracker :v:), but rather some larger, self-contained project. You need the project to be something that you want to happen and is useful, but also something you don't mind if it fails.

Cugel the Clever
Apr 5, 2009
I LOVE AMERICA AND CAPITALISM DESPITE BEING POOR AS FUCK. I WILL NEVER RETIRE BUT HERE'S ANOTHER 200$ FOR UKRAINE, SLAVA
My org this year has decided that, because product's roadmap far outstrips our resourcing, all our interns will take up projects that are intended to be more or less production-ready by the end of their 12 weeks. I'm not sure whether this idea came from product or our management, but, needless to say, every engineer's response was uniformly some mixture of:
:aaaaa: :shepicide:

We've been assured that everyone up the chain understands that an intern project is likely to end up in a state that is not production-ready, if it doesn't crash and burn outright. I'm quite curious to see if that understanding persists 12 weeks from now! The project I've handed off to the intern I'm mentoring is relatively straightforward compared to a lot of the others, but there's still so drat much for them to ramp up on...

And, of course, mentoring the intern eats up a lot of my time when I've already been constantly jumping from extinguishing one fire to the next all year, unable to focus on the larger tasks and projects that I need to take on to put me on track for a promo (or just plain provide evidence that I'm fit for my current level, if management decides to ignore the constant incidental firefighting). The senior devs on my team are awesome and somehow find time to ideate and churn out smart proposal docs for cool improvements, while I'm getting pulled in every direction on "urgent" matters and can barely fit in time for the tickets I've taken up for the sprint. I guess I can try to better communicate to my manager what's all already on my plate and that, no, I really can't provide meaningful input on new items X, Y, and Z without sacrificing the other poo poo I'm trying to get done :shrug:

I'm planning on staying through the end of the year for the 401k match to vest, but if the job market is still favorable to engineers (perhaps less so this summer with recent layoffs?), I'll probably jump ship.

asur
Dec 28, 2012
401k match is never worth staying for. It's a trick that companies are well aware of and use to retain you.

Slimchandi
May 13, 2005
That finger on your temple is the barrel of my raygun
Can't really decide if this belongs in Oldie or Newbie advice, all thoughts welcome.

I've been working exclusively in Python development for the last seven years, with a focus on data processing, pipelines and ETL, but also more traditional OOP for models. I've developed pretty complicated GUI tools in notebooks, and even a JSON API or two in Lambda. I've taught myself a lot of software design principles and parts of the AWS data stack, and now lead a team of 3 others. I really enjoy learning about software development and its principles; it's always felt like something I would enjoy.

My company got acquired by an org 10x the size about 12 months ago, but I can see my new role basically being the same data handling tasks over and over. I'm not learning any new technologies, and the current stack is less modern than where I was before.

I've started looking for new roles, but something came up in my area that is asking for senior devs with significant Python experience along with knowledge of networking, webdev and C++. It feels like a big leap for me; these are areas I've never looked into. Are they domains that I could 'pick up' with the aid of a few good books? Or am I way out of my depth here? It sounds like a great opportunity to branch out beyond my normal four walls, I'm just not sure I won't get laughed out of town in the interview.

Slimchandi fucked around with this message at 08:02 on Jun 29, 2022

thotsky
Jun 7, 2005

hot to trot
Webdev can mean anything, and seeing as pretty much anything is on the web these days, it's unlikely you wouldn't qualify. Likewise, unless they're specifically asking for experience with low-level network programming, like coming up with new drivers and protocols and poo poo, your experience with pipelines might be plenty.

Learning a new language can be tough, but job ads always ask for more than most people can realistically deliver; I think there's a good chance you could find a niche where you never had to touch C++, but on the other hand, you have enough experience that picking it up in a reasonable time frame is realistic.

Edly
Jun 1, 2007

Cugel the Clever posted:

My org this year has decided that, because product's roadmap far outstrips our resourcing, all our interns will take up projects that are intended to be more or less production-ready by the end of their 12 weeks. I'm not sure whether this idea came from product or our management, but, needless to say, every engineer's response was uniformly some mixture of:
:aaaaa: :shepicide:

We've been assured that everyone up the chain understands that an intern project is likely to end up in a state that is not production-ready, if it doesn't crash and burn outright. I'm quite curious to see if that understanding persists 12 weeks from now! The project I've handed off to the intern I'm mentoring is relatively straightforward compared to a lot of the others, but there's still so drat much for them to ramp up on...

And, of course, mentoring the intern eats up a lot of my time when I've already been constantly jumping from extinguishing one fire to the next all year, unable to focus on the larger tasks and projects that I need to take on to put me on track for a promo (or just plain provide evidence that I'm fit for my current level, if management decides to ignore the constant incidental firefighting). The senior devs on my team are awesome and somehow find time to ideate and churn out smart proposal docs for cool improvements, while I'm getting pulled in every direction on "urgent" matters and can barely fit in time for the tickets I've taken up for the sprint. I guess I can try to better communicate to my manager what's all already on my plate and that, no, I really can't provide meaningful input on new items X, Y, and Z without sacrificing the other poo poo I'm trying to get done :shrug:

I'm planning on staying through the end of the year for the 401k match to vest, but if the job market is still favorable to engineers (perhaps less so this summer with recent layoffs?), I'll probably jump ship.

This post is setting off sirens in my head - please don't sacrifice yourself doing work that won't be recognized.

Allow me to suggest that, by taking on all this firefighting, you may not actually even be helping your team in the long run. For example: your manager might be unaware that there is an unsustainable problem because your heroic efforts are masking it; or maybe some of this firefighting would be promo-worthy for someone else on your team but you're not giving anyone else the opportunity.

Can you either redistribute some of this work to the rest of the team, or simply allow some things to fail? That could free up some of your time to take a step back and plan more sustainable, long term solutions.

All of the above only applies in a healthy org/with a good manager, although if that doesn't apply to your situation then don't wait until the end of the year to leave, start looking now.

Hadlock
Nov 9, 2004

You're gonna wave off a 20% raise now, to capitalize on a 4% gain in six months?

Ok

CPColin
Sep 9, 2003

Big ol' smile.

Slimchandi posted:

all thoughts welcome.

Nintendo mixed up the enemy names "Lanmola" and "Moldorm" between Zelda 1 and Zelda 3

Cugel the Clever
Apr 5, 2009
I LOVE AMERICA AND CAPITALISM DESPITE BEING POOR AS FUCK. I WILL NEVER RETIRE BUT HERE'S ANOTHER 200$ FOR UKRAINE, SLAVA

asur posted:

401k match is never worth staying for. It's a trick that companies are well aware of and use to retain you.
Yeah, the unvested portion is a minor fraction of my salary and would likely be more than made up for by any pay bump that came with a new position. I'm kind of just using it as an arbitrary deadline because there's a lot about the current job that I'm comfortable with, especially my teammates.

Edly posted:

Can you either redistribute some of this work to the rest of the team, or simply allow some things to fail? That could free up some of your time to take a step back and plan more sustainable, long term solutions.

All of the above only applies in a healthy org/with a good manager, although if that doesn't apply to your situation then don't wait until the end of the year to leave, start looking now.
This is what I'd like to pursue. Excepting me and a senior dev, my team is a gaggle of juniors, and finding ways to ramp them up to take on some of these things I get randomized with would be good for everyone. Our resourcing is pretty constrained, even with my manager doing a decent job of shielding us from the wildest product asks, but I'm hoping to discuss that further with him.

But, yeah, if the pressure we're under gets toxic or if my contributions were being ignored, that would see me bowing out early rather than sticking around to fight through it.

Edly
Jun 1, 2007
It sounds like you're not sure whether your contributions will be recognized; I think it's worth checking in with your manager explicitly on that today so you can potentially change course instead of waiting for a performance review. I'm saying this having learned that lesson the hard way; I left my last job over a similar situation and I'm still kinda bitter about it.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Hey I wanna fixate on the:

Slimchandi posted:

webdev and C++

Two things that are rarely mixed well, right there. We can dissect your current situation all we want for shits and giggles, but I saw that and wondered if this new place was expecting some kind of unicorn. Like, maybe they're working on some embedded web gadget, but I'd still question throwing C++ at that kind of problem. Or maybe they have some fixation on performance. That fixation might be really bad, as in "writing it in C++ will just make it faster, no problem." It can, but if this isn't a big-rear end place that has a history with that, I'm worried.

Slimchandi
May 13, 2005
That finger on your temple is the barrel of my raygun

Rocko Bonaparte posted:

Hey I wanna fixate on the:

Two things that are rarely mixed well, right there. We can dissect your current situation all we want for shits and giggles, but I saw that and wondered if this new place was expecting some kind of unicorn. Like, maybe they're working on some embedded web gadget, but I'd still question throwing C++ at that kind of problem. Or maybe they have some fixation on performance. That fixation might be really bad, as in "writing it in C++ will just make it faster, no problem." It can, but if this isn't a big-rear end place that has a history with that, I'm worried.

I believe they make caching networking hardware involving Squid, and the web dev comes in on the admin portal/management side, if that helps?

Dotcom656
Apr 7, 2007
I WILL TAKE BETTER PICTURES OF MY DRAWINGS BEFORE POSTING THEM
I wanted to toss a question to the thread. The last time I posted here I was working in an embedded role (in title at least) and needed to escape a boss that had turned sour because I got tossed onto a solo project with no support and couldn't deliver results on time. I got told to "go re-evaluate my career."

I escaped to a new place, but this role is like a glorified sysadmin role despite the senior software engineer title. We're stuck writing Perl and Bash scripts, and Python once in a blue moon, and I am bored. I feel like I'm losing my skills. I haven't written substantial new code in months, or even code to fix bugs.

There's no way to grow in this role, and repeated talks with my manager about moving to a team that does anything else relevant always end in a "that's a few months away" type of discussion.

I've been trying to learn full-stack web development in my free time after work and on weekends, as well as going through LeetCode and algorithm textbooks, since it seems like the vast majority of jobs out there want experience with JavaScript, C#, Go, Terraform, etc. However, I feel a bit stuck: if I go back to mid-level to try to get a job where I can actually grow as a backend web dev, I'm going to take a significant pay cut.

Is there any advice anyone can offer? I feel like I've hosed myself in my career. I'm not experienced enough to go back to firmware jobs (my last job was mostly writing a Python desktop app by myself, so I hardly got to work on real firmware), and I don't have the experience to get in the door with web development elsewhere. I feel stuck.

Edit: I guess this post isn't a question, just posting into the void and feeling bad for myself.

Dotcom656 fucked around with this message at 18:08 on Jul 1, 2022

spiritual bypass
Feb 19, 2008

Grimey Drawer
"modernize" the Perl by rewriting it in a modern language, then add that to your resume. Doesn't matter if it's a good idea, your boss deserves it

Achmed Jones
Oct 16, 2004



idk your situation is sometimes called "gravy train" depending on how much you're paid and how stable the job is.

if you aren't a fan of that, then apply for new jobs. it sounds like you have experience that'd roughly match devops positions. idk how much you know about that world; my advice is to do a bunch of interviews, and if you bomb out on them, you'll know which areas you need to improve. this advice may not be super helpful if whatever region you're looking in is insanely small.

just apply, the downsides to doing so are very small

oliveoil
Apr 22, 2016
So how do you like to back-of-the-envelope CPU resource requirements?

Say you have a list of integers, and you want to iterate over the whole thing and print out "my fizzbuzz" every time you hit an integer divisible by 1234.

E.g. something like

def myFizzbuzz(nums):
    for n in nums:
        if n % 1234 == 0:
            print('my fizzbuzz')

And I tell you I have 500 trillion numbers for you to check, and ask you how many CPUs would be needed to check all 500 trillion numbers within three months.

I'd give a really rough estimate by assuming a CPU can execute 3 billion instructions per second and then basically guessing the instructions needed for each line.

So each iteration of the for loop would start with 3 instructions for the first line of the function:
- check if there is another element in the list, which I'd assume is a simple array to simplify further
- increment a pointer to the next element
- assign the next element's value to n

The if line would have 3 more obvious instructions:
- the modulo operation: n % 1234 would be one instruction, even though I know x86 would probably need several (moving n and 1234 into the right registers, performing the operation, moving the result back)
- comparing the result of the modulo operation to 0 would be another instruction
- regardless of the comparison's result, there's a jump (either to or past the print() call)

The print() line would be a method call, taking a bunch of instructions to set up a new stack frame, set up pointer state, etc. Then there would be 11 characters in the string to iterate over and print. I'd get really sloppy here and just say the work we have to do is:
- invoke a method (stack frame setup, blah blah)
- print 11 characters, which means at some level a comparison to check for end of string, copying each character to some buffer, and calling some operating system API to render each character

For everything except the print call, I'd say there are actually going to be many times more CPU instructions executed than I counted, especially in a language like Python, and just multiply my counts by like 100. I honestly have no idea if 100 would be a reasonable multiplier or not. So I'd estimate 300 instructions for each iteration of the for line and another 300 instructions for each iteration of the if line.

For the print line, I'd just pull 1000 instructions out of my rear end for the cost to invoke a method, and then I'd also pull out 1000 instructions for the OS API call per character. So 2000 instructions each time we call it. Then I'd ask if we can assume the input has a uniformly random distribution, and if so, I'd say it gets executed about once every 1000 loop iterations (though really it would be more like once every 1234 iterations), so the net work for the print line would be about 2000 / 1000 = 2 instructions per input element.

Added to the 600 instructions for the first two lines, those 2 instructions are basically nothing so I'd throw them away and let my estimated work be 600 instructions per input element.

Since I assumed one CPU can do 3 billion instructions per second (3 GHz) and that all instructions take an equal amount of time, I'd say one CPU can process 5 million input elements per second. That means it would take 100 million CPU-seconds to process the entire input list of 500 trillion numbers.

100 million CPU-seconds / 86,400 seconds per day = about 1,157 CPU-days.

Since we want the answer within three months, divide 1,157 by 90 to get about 13. We'd want about 13 CPUs. Of course this relies on an unstated assumption that all our CPUs have a single core, so really what it says is we'd want 13 CPU cores, but at least we know that we can probably do this task with one or two machines.
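The whole estimate fits in a few lines of Python, which makes it easy to swap assumptions in and out. Everything here is the same guesswork as above (600 instructions per element, 3 GHz, 90 days), just made explicit:

```python
# Back-of-the-envelope CPU count for scanning 500 trillion integers.
INSTRUCTIONS_PER_ELEMENT = 600   # guessed cost of one loop iteration
INSTRUCTIONS_PER_SECOND = 3e9    # ~3 GHz, one instruction per cycle
NUM_ELEMENTS = 500e12            # 500 trillion inputs
DEADLINE_DAYS = 90               # three months

elements_per_second = INSTRUCTIONS_PER_SECOND / INSTRUCTIONS_PER_ELEMENT
total_cpu_seconds = NUM_ELEMENTS / elements_per_second
total_cpu_days = total_cpu_seconds / 86_400
cpus_needed = total_cpu_days / DEADLINE_DAYS

print(f"{total_cpu_days:,.0f} CPU-days -> ~{cpus_needed:.0f} CPU cores")
# prints: 1,157 CPU-days -> ~13 CPU cores
```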

Now I imagine trying to apply this process to figure out how much I'd have to spend on compute to handle some level of request load on a web application, and it seems like a massive pain in the rear end. I think I'd go crazy before I'd even start to consider how many instructions the framework is adding, what my library calls are adding, etc.

Assuming this or that database call takes 1-1000ms, or some other API call takes some other amount of time, and that the time spent processing a request is dominated by such calls, is probably how I'd simplify things, but then that doesn't seem to help me figure out how much to budget for request-handling compute.

Am I doing this completely wrong?

I guess nothing beats actually prototyping something, hitting it with some test load, and seeing how much compute you really need to buy to handle some amount of requests, but it would be nice if I could at least have a rough estimate up front.

oliveoil fucked around with this message at 18:43 on Jul 1, 2022

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
it's a decent enough question in c-land and assembler-land as opposed to python-land. you should really do empirics in python-land
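Doing the empirics is cheap here: time the actual loop on a sample and extrapolate. A rough sketch; the sample size is arbitrary, and the numbers will vary by machine and Python version:

```python
import time

def count_fizzbuzz(nums):
    # Same loop as the example, but counting matches instead of printing,
    # so terminal I/O doesn't dominate the measurement.
    hits = 0
    for n in nums:
        if n % 1234 == 0:
            hits += 1
    return hits

SAMPLE = 1_000_000
nums = list(range(SAMPLE))

start = time.perf_counter()
count_fizzbuzz(nums)
seconds_per_element = (time.perf_counter() - start) / SAMPLE

# Extrapolate the measured per-element cost to the full 500 trillion.
cpu_days = seconds_per_element * 500e12 / 86_400
print(f"{seconds_per_element * 1e9:.0f} ns/element, "
      f"~{cpu_days:,.0f} CPU-days for 500 trillion elements")
```

The measured per-element cost will generally land well above the hand-counted C-style instruction estimate, which is exactly why measuring beats counting for Python.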

oliveoil
Apr 22, 2016

bob dobbs is dead posted:

it's a decent enough question in c-land and assembler-land as opposed to python-land. you should really do empirics in python-land

I was afraid of that. Oh well. Thank you!

Although, I just had another thought: if it works for C and assembler, maybe we can treat it as C or assembler and then apply a multiplier afterward 🤔

Quick googling suggests 25x might be a reasonable multiplier for Python vs C:
http://scribblethink.org/Computer/javaCbenchmark.html

For Java, treat it like a C estimate and maybe multiply by 2?

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.
When talking about evaluations on a set of something like 500 trillion points, the most important factor in performance on modern architectures is very likely cache locality. Or distribution across physical machines.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


If you're doing numbers in your head, then 25x is fine. I'd probably make it 20x just because that's easier to multiply by, but it doesn't matter that much. However, the actual ratio is going to vary a lot across different programs, so you're back in empirical land if you want anything more precise than that.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

oliveoil posted:

So how do you like to back-of-the-envelope CPU resource requirements?

Say you have a list of integers and you want to interate over the whole thing and you want to print out "my fizzbuzz" every time you iterate over an integer divisible by 1234.

E.g. something like

def myFizzbuzz(nums):
for n in nums:
if n % 1234 == 0:
print('my fizzbuzz')

And I tell you I have 500 trillion numbers for you to check, and ask you how many CPUs would be needed to check all 500 trillion numbers within three months.

I'd give a really rough estimate by assuming a CPU can execute 3 billion instructions per second and then basically guessing the instructions needed for each line.

So each iteration of the for loop would start with 3 instructions for the first line of the function:
- check if there is another element in the list, which I'd assume was a simple array to simplify more
- increment a pointer to the next element
- assign the next element 's value to n

The if line would have 3 more obvious instructions:
- the modulo operation, n % 1234 would be one instruction, even though I know x86 would probably need multiple instructions for moving n and 1234 to special registers, another instruction to perform the operation, another to move it back
- the result of the modulo operation being compared to 0 would be another instruction
- regardless of the comparison's result, there's a jump (either to or past the print() call)

The print() line would be method call, taking a bunch of instructions to set up a new stack frame, set up pointer state, etc. Then there would be 11 characters in the string to interate over and print. I'd get really sloppy here and just say the work we have to do is:
- invoke a method (stack frame setup, blah blah)
- print 11 characters, which means on some level comparison operations to check end of string and copy each character to some buffer, call some operating system API to render each character

For everything except the print call, I'd say there are actually going to be many times more CPU instructions executed than I counted, especially in a language like Python, and just multiply my counts by like 100. I honestly have no idea if 100 would be a reasonable multiplier or not. So I'd estimate 300 instructions for each iteration of the for line and another 300 instructions for each iteration of the if line.

For the print line, I'd just pull 1000 instructions out of my rear end for the cost to invoke a method and then I'd also pull out 1000 instructions for the OS API call per character. So 2000 instructions each time we call it. Then I'd ask if we can assume the input has a uniformly random distribution and if so, I'd say it gets executed about once every 1000 loop iterations (though really it would be more like once every 1234) iterations, so that the net work for the print line would about 2000 / 1000 = 2 instructions per input element.

Added to the 600 instructions for the first two lines, those 2 instructions are basically nothing so I'd throw them away and let my estimated work be 600 instructions per input element.

Since I assumed one CPU could do 3 billion instructions per second (3ghz) and all instructions take an equal amount of time, I'd say one CPU can process 5 million input elements per second. That means it would take 100 million CPU seconds to process the entire input list of 500 trillion numbers.

100 million CPU seconds / 86400 seconds per day = about 1157 CPU days.

Since we want the answer within three months, divide 1157 by 90 to get about 13. We'd want about 13 CPUs. Of course this is relying on an unstated assumption that all our CPUs have a single core, so really what it says is we'd want 13 CPU cores, but at least we know that we can probably do this task with like 1 or 2 machines.
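The whole estimate fits in a few lines of arithmetic; every constant below is one of the guesses from above, not a measurement:

```python
# Back-of-the-envelope check of the estimate. All constants are guesses.
CLOCK_HZ = 3e9                 # assumed 3 GHz, one instruction per cycle
INSTR_PER_ELEMENT = 300 + 300  # "for" line + "if" line, after the 100x fudge
N = 500e12                     # 500 trillion input numbers
DEADLINE_DAYS = 90             # "within three months"

elements_per_cpu_second = CLOCK_HZ / INSTR_PER_ELEMENT  # 5 million
cpu_seconds = N / elements_per_cpu_second               # 100 million
cpu_days = cpu_seconds / 86400                          # ~1157
cpus_needed = cpu_days / DEADLINE_DAYS                  # ~13

print(round(cpu_days), "CPU days,", round(cpus_needed), "CPU cores")
```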

Now I imagine trying to apply this process to figuring out how much I'd have to spend on compute to handle some level of request load on a web application and it seems like a massive pain in the rear end. I think I'd go crazy before I'd start to consider how many instructions the framework is adding, what my library calls are adding, etc.

Assuming this or that database call takes 1-1000ms, or some other API call takes some other amount of time, and that the time spent processing a request is dominated by such calls, is probably how I'd simplify things. But that doesn't seem to help me figure out how much to budget for request-handling compute.

Am I doing this completely wrong?

I guess nothing beats actually prototyping something, hitting it with some test load and seeing how much compute you really need to buy in order to handle some amount of requests but it would be nice if I could at least have a rough estimate up front.
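One rough shortcut for the request-load version, assuming request time really is dominated by waiting on downstream calls, is Little's law: requests in flight = arrival rate × mean time per request. This is a sketch with made-up numbers, not a sizing method anyone in the thread endorsed:

```python
# Fermi-style web-capacity estimate via Little's law. All inputs are guesses
# you would document and later replace with measured numbers.
import math

def machines_needed(requests_per_sec, mean_latency_sec, workers_per_machine):
    # Little's law: L = lambda * W, i.e. concurrent requests in flight.
    in_flight = requests_per_sec * mean_latency_sec
    return math.ceil(in_flight / workers_per_machine)

# e.g. 2000 rps, 50 ms average (mostly DB wait), 100 workers per box
print(machines_needed(2000, 0.05, 100))
```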

I just bullshit my back of the envelope estimates if I need them, prototype, and test it empirically. Treat the estimate like a Fermi problem: pick the numbers that go into it, document what they are, and even if they're wrong, working the problem Fermi-style should give you a better idea of where to look for the real answer. Shoot for horizontal scalability in your architecture if you're worried about underspecifying the hardware.

Dotcom656
Apr 7, 2007
I WILL TAKE BETTER PICTURES OF MY DRAWINGS BEFORE POSTING THEM

Achmed Jones posted:

idk your situation is sometimes called "gravy train" depending on how much you're paid and how stable the job is.

if you aren't a fan of that, then apply for new jobs. it sounds like you have experience that'd roughly match with devops positions. idk how much you know about that world, my advice is to do a bunch of interviews and if you bomb out on them, you have areas that you know you need to improve. this advice may not be super helpful if whatever region you're looking in is insanely small.

just apply, the downsides to doing so are very small

I get paid okay. But it's nothing compared to what people in here are making and posting.

On the few interviews I do get, I get as far as "so here's the personal project I worked on to get experience working with web APIs and websockets", then they ask me if I've ever had to scale it to thousands of users. Which of course I haven't, because it's just my personal project, and it fizzles out after that.

So yeah. I'm applying. It just loving sucks when you realize your 5 years of experience were in poo poo that doesn't matter and you know nothing that everyone wants you to know.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Dotcom656 posted:

I get paid okay. But it's nothing compared to what people in here are making and posting.

On the few interviews I do get, I get as far as "so here's the personal project I worked on to get experience working with web APIs and websockets", then they ask me if I've ever had to scale it to thousands of users. Which of course I haven't, because it's just my personal project, and it fizzles out after that.

So yeah. I'm applying. It just loving sucks when you realize your 5 years of experience were in poo poo that doesn't matter and you know nothing that everyone wants you to know.

You can scale test your stuff without having live users.
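For example, a minimal self-contained sketch: spin up a throwaway local HTTP server and hammer it with concurrent requests, so no live users or external infrastructure are involved (port, request count, and worker count are all arbitrary):

```python
# Toy load test: start a local server on an ephemeral port, fire 200
# concurrent GETs at it, and report throughput. No live users needed.
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Hello)  # port 0 = pick a free one
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

N = 200
start = time.time()
with ThreadPoolExecutor(max_workers=20) as pool:
    bodies = list(pool.map(
        lambda _: urlopen(f"http://127.0.0.1:{port}/").read(), range(N)))
elapsed = time.time() - start
server.shutdown()

print(f"{N} requests in {elapsed:.2f}s ({N / elapsed:.0f} rps)")
```

Swap the toy handler for your actual app and crank N up until something breaks; that breaking point is the scaling number interviewers are fishing for.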

asur
Dec 28, 2012

oliveoil posted:

I was afraid of that. Oh well. Thank you!

Although, I just had another thought: if it works for C and assembly, maybe we can try to treat it as C or assembly and then apply a multiplier afterward 🤔

Quick googling suggests 25x might be a reasonable multiplier for Python vs C:
http://scribblethink.org/Computer/javaCbenchmark.html

For Java, treat it like a C estimate and maybe multiply by 2?
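Any multiplier like that is easy to sanity-check empirically; timing a bare Python loop gives you real nanoseconds per iteration to compare against the instruction-count guesses (results will vary by machine, hence no hard-coded expectation here):

```python
# Measure how long a trivial loop-plus-compare actually takes per iteration,
# to sanity-check the "hundreds of instructions per line" style of guess.
import timeit

N = 1_000_000

def loop():
    hits = 0
    for n in range(N):
        if n % 1000 == 0:
            hits += 1
    return hits

seconds = min(timeit.repeat(loop, number=1, repeat=3))
ns_per_iter = seconds / N * 1e9
print(f"~{ns_per_iter:.0f} ns per iteration")
# On a ~3 GHz machine, multiply the ns figure by 3 to get an effective
# "instructions per iteration" number to compare against the estimate.
```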

Did someone actually ask you this question? There are so many things wrong with it, including the focus on CPU as opposed to all the other limitations.

oliveoil
Apr 22, 2016
Thanks, everyone! That helps me feel better about not seeing any obvious alternative to either a really rough back of the envelope or prototype + test effort.

asur posted:

Did someone actually ask you this question? There's so many things wrong with it including the focus on CPU as opposed to all limitations.

Yeah, I got asked this years ago, when interviewing out of college. The actual question was for iterating over a different data structure but still just as straightforward. Then I was asked to explain the worst-case time complexity of my implementation. Then the next question was "say I have <some huge number> of nodes, could I do it reasonably quickly on a single CPU? About how long would it actually take as you've written it?"

And that was the whole question. I wasn't really satisfied with the lack of precision in my answer, but I also figured it could actually be useful for ballparking real world resource costs if I could do it a little more accurately.

oliveoil fucked around with this message at 19:13 on Jul 2, 2022

Artemis J Brassnuts
Jan 2, 2009
I regret😢 to inform📢 I am the most sexually🍆 vanilla 🍦straight 📏 dude😰 on the planet🌎
Did they want actual clock-cycle approximations? Is that a thing? If someone in an interview asks me how long a binary search takes on a single CPU, my answer is just “log N”.

Granted, I’m in game dev so we just throw best practices at a problem and profile it once our frame rate drops; not sure if the polo-n-khakis crowd handles things differently.
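For what it's worth, the “log N” answer is easy to demonstrate by instrumenting a binary search with a step counter (my illustration, not anything the interviewer asked for):

```python
# Binary search that counts its iterations: the count comes out near
# log2(N), which is why "log N" is usually all the answer you need.
def bsearch(sorted_list, target):
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, steps
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
idx, steps = bsearch(data, 999_999)
print(steps, "steps for N =", len(data))  # near log2(1e6), i.e. about 20
```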

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


I would've politely asked them whether they thought having multiple CPUs would help.

oliveoil
Apr 22, 2016

Artemis J Brassnuts posted:

Did they want actual clock cycles approximations? Is that a thing? If someone in an interview asks me how long a binary search takes on a single cpu, my answer is just “log N”.

Yeah this was actually a practice interview from a friend of an old friend who worked at a FANG at the time. He went through my program and seemed to casually come up with a number of instructions for each line and told me I could assume a 3 GHz processor could execute 3 billion instructions per second.

He told me he basically wanted to know if we could sequentially process the specified amount of data or if we would need to do the work in parallel across multiple processes to finish it all in a reasonable amount of time.

ultrafilter posted:

I would've politely asked them whether they thought having multiple CPUs would help.

Maybe that's what he was hoping to hear. I only got a few minutes with him afterward, and according to him he'd have given me neither good nor bad marks if it had been a real interview. Which sounds like a polite way of saying he wouldn't have recommended I be hired if it was a real interview. 😅

Maybe the rest of the interview was going to be about how I would split up the work across multiple machines but I spent so long on that part of the question that we ran out of time.

oliveoil fucked around with this message at 19:41 on Jul 2, 2022
