necrotic
Aug 2, 2005
I owe my brother big time for this!

Empress Brosephine posted:

This works and it rocks, thank you so so much. That was way easier than I expected.


Another quick question; instead of writing about 200 if statements to see if a field is empty or not using lodash, can I do it using a switch case?

so right now i have:
code:
if (isEmpty(req.body.field1) == false) {
  values.push({
    attrib_id: 100,
    value: req.body.field1,
    response_id: results.insertForm.response_id,
  });
}
is it possible to do:

code:
switch (isEmpty(req.body[x]) == false) {
  case 'field1':
    values.push({
      attrib_id: 100,
      value: req.body.field1,
      response_id: results.insertForm.response_id,
    });
  case 'field2':
  case 'field3':
}
I realized while typing this out the easy thing would've been a loop that iterates over all the fields, but I wasn't smart and didn't name the fields to their DB id number.

Make a mapping of body field to attrib_id and iterate that.
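A sketch of that approach, with a stand-in for lodash's isEmpty so it's self-contained, and with every attrib_id beyond field1 → 100 invented for illustration:

```javascript
// Stand-in for lodash's isEmpty, just so the sketch runs on its own.
const isEmpty = (v) => v == null || v === '';

// Hypothetical mapping of body field name -> attrib_id. Only field1 -> 100
// comes from the post above; the rest are made-up examples.
const FIELD_ATTRIBS = {
  field1: 100,
  field2: 101,
  field3: 102,
};

function buildValues(body, responseId) {
  const values = [];
  // One loop replaces the ~200 if statements: skip empty fields, push the rest.
  for (const [field, attribId] of Object.entries(FIELD_ATTRIBS)) {
    if (!isEmpty(body[field])) {
      values.push({
        attrib_id: attribId,
        value: body[field],
        response_id: responseId,
      });
    }
  }
  return values;
}
```

Then the handler is just `const values = buildValues(req.body, results.insertForm.response_id);` and adding a field means adding one line to the mapping.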


Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
This Node.JS stuff is weeeeeird.

I'm toying around with React Router and dropping both a <Link to="/"> and an <a href="/"> into the document. Within the HTML code they look exactly the same (both plain <a href="/">), yet clicking the one generated by <Link> navigates immediately whereas the manually written <a> reloads the whole page. How does that even happen?

Obfuscation
Jan 1, 2008
Good luck to you, I know you believe in hell

Combat Pretzel posted:

This Node.JS stuff is weeeeeird.

I'm toying around with React Router and dropping both a <Link to="/"> and an <a href="/"> into the document. Within the HTML code they look exactly the same (both plain <a href="/">), yet clicking the one generated by <Link> navigates immediately whereas the manually written <a> reloads the whole page. How does that even happen?



The one made with Link has an event handler attached to it that captures your click and handles it as an app state change instead.
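Roughly, and this is a simplified sketch rather than the actual react-router source, the attached handler does something like:

```javascript
// Simplified sketch of the click handler a client-side router attaches to
// the anchors its <Link> renders. Not the real react-router implementation.
function routerClickHandler(history, to) {
  return (event) => {
    event.preventDefault();        // cancel the browser's full-page navigation
    history.pushState({}, '', to); // swap the URL in place instead
    // ...the router then re-renders whatever component matches `to`
  };
}
```

The hand-written `<a>` has no such handler, so the browser does what it always does with a link: a full page load.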

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Ola posted:

Using tables for layout is also hell on accessibility, it makes it harder to navigate with screen readers, which reads tables expecting tabular data where they can read the column header for every cell etc. Again, if your product design is cool with that, go ahead. But you've surfed past thousands of sites that have successfully used proper layout elements without noticing, there are heaps and heaps of tooling and tutorials to help you learn it.
Now there's a decent reason to eschew tables, at last! Totally agree tables are worse for screen readers.

And while I kind of agree about tables being worse for adapting to different sized screens, that's more in theory than in practice, because every modern website except Amazon is a loving hideous frustrating travesty in every screen size, and would have been more usable made with Livejournal-era markup. (Because then on your phone screen or whatever you can zoom in if you want, whereas the poo poo people make now is like "oh you tried to zoom in? Just gonna rescale everything so the thing you were trying to zoom in on gets smaller then! Happy to help!")

It's not the modern tools I hate, it's the stupid poo poo people think is a good idea to do with them.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

barkbell posted:

sounds like you need a grid
Yes, I was arguing with the idea that a series of flexboxes provides most of the same capabilities as a table. I agree that grid is similar to table.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

roomforthetuna posted:

Yes, I was arguing with the idea that a series of flexboxes provides most of the same capabilities as a table. I agree that grid is similar to table.
So this discussion, especially the accessibility thing, has prompted me to try to migrate my table-layout to grid-layout.

There's a lot of weird poo poo involved in getting the behavior to be similar to a simple table, for example the column template I want was "auto auto 1fr auto" which wasn't a feature mentioned in the cheatsheet that was linked earlier.

And now I'm stuck with, what I want is for the div grid element to be the size of the specified grid area, but for its text content to be vertically centered. The only way I've found to do this that works is to put another div or span inside the grid div, which seems like quite the balls solution (and even getting that to work is painful) - is there something better?

Messing with margins on the grid element makes the box shrink, so the text is vertically centered but without the padding any more. Adding vertical-align:middle doesn't do anything.

Impotence
Nov 8, 2010
Lipstick Apathy
align-items: center;justify-content: center; ?

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Biowarfare posted:

align-items: center;justify-content: center; ?
I think they both make the *box* smaller (and then positioned where I want the content), where what I want is for the box to be full size and its content to be positioned like that, so its background color area is larger than the text covered area. Maybe now I describe it that way, sub-elements is the way to go though.

(Making the cell "display:flex" and the sub-element "margin: auto" does what I want.)

I think tables achieved this behavior by secretly being two elements per cell under the hood.
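For reference, that pattern in CSS (the class names are invented):

```css
/* The grid item keeps its full grid-area size (background included); the
   inner element centers itself within that box. */
.cell {
  display: flex;   /* the grid item is itself a flex container */
}
.cell > .content {
  margin: auto;    /* auto margins on all sides center on both axes */
}
```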

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I suppose no one here knows an npm package to deal with a DB2 on z/OS, that a) doesn't involve wrapping jt400 and b) doesn't make Webpack poo poo itself like it does with the current odbc npm package I'm using for development (because of requires with a path set during runtime).

Lonely Wolf
Jan 20, 2003

Will hawk false idols for heaps and heaps of dough.
Look this is all very simple:

tables for layout = bad for accessibility

tables for tabular data = good for accessibility

to complicate things:

table markup comes with default aria (role=table) and default css (display: table;). Both can be used separately from table tags. You can have something with table markup that is not a table and something without table markup that looks like a table.

to complicate things:

browsers need to deal with old sites that used tables for layouts so they apply heuristics to detect these and remove the table role so they're presented to screenreaders and the like the same as a bunch of divs, negating the part about tables being bad for accessibility.

Sometimes this is overcorrective, negating the part about tables being good for accessibility, and you have to add the roles back if no really it's a table, I swear. (If you change the display property on small viewport breakpoints you trigger this and have to add the roles back, fun)
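Restoring the semantics by hand looks something like this (a minimal sketch; the class and label are invented):

```html
<!-- A table styled as something else keeps its table semantics by
     restating the ARIA roles explicitly. -->
<div role="table" aria-label="Prices" class="responsive-table">
  <div role="row">
    <span role="columnheader">Item</span>
    <span role="cell">Widget</span>
  </div>
</div>
```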

Biowarfare posted:

align-items: center;justify-content: center; ?

place-items: center;

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Say if I were to work on two Node.JS packages concurrently, because one is dependent on another, and I used npm-link (which the Internet suggests I should) to set things up, would the fswatcher in the main project, that kicks off a compile/webpack on file changes, also pick up on file changes in the dependency project?

Also, is the Javascript Map just a flat key-value store, or does it internally use a hash map or binary tree to aid performance?

biznatchio
Mar 31, 2001


Buglord

Combat Pretzel posted:

Also, is the Javascript Map just a flat key-value store, or does it internally use a hash map or binary tree to aid performance?

Per the Javascript spec, Map must be implemented in some way to provide performance better than O(n) -- for instance, like a hashmap; but it doesn't specify *exactly* how it should be implemented to reach that goal:

quote:

Map object must be implemented using either hash tables or other mechanisms that, on average, provide access times that are sublinear on the number of elements in the collection. The data structures used in this Map objects specification is only intended to describe the required observable semantics of Map objects.

Map does have some other required behaviors that mean it's not just a hashmap under the covers (it's required to retain the insertion order of items, for example); but regardless, it should in all cases be better than abusing an object property list as a map.
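The insertion-order guarantee is easy to see:

```javascript
// Map iterates in insertion order, regardless of key values.
const m = new Map();
m.set('b', 2);
m.set('a', 1);
m.set('c', 3);
console.log([...m.keys()]); // → ['b', 'a', 'c'], not sorted
```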

biznatchio fucked around with this message at 16:19 on Mar 12, 2021

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Oh good, thanks. I'm currently abusing(?) a bunch of Maps as look-up tables to reconstruct a history graph of amount of active manufacturing per day per priority level. I'd rather not have computation time scale exponentially when extending the date range.

Ape Fist
Feb 23, 2007

Nowadays, you can do anything that you want; anal, oral, fisting, but you need to be wearing gloves, condoms, protection.
Question for anyone using NestJS/Express, or really any server-side framework & PassportJS.

So, I create a standard auth system which spits out a JWT when someone has successfully logged in, but I don't know how to, like, set that token as a permanent header for all client-side requests moving forward?

I know how to, for example, set up some sort of client side JS to make post/gets, etc using the JWT as an access path into the back-end, but I just want to like:

Post login details to server -> Server says OK, hands out JWT. -> JWT is assigned to all header requests moving forward on the client-side for the duration of the session, or until such time as the token expires.

As I said, I know you can make HTTP requests to the server via stuff like fetch() and pass it the JWT there, but surely I don't need to build an entire SPA just to hold, store, and manage the JWT for the whole session? I don't want to build my admin panel as an SPA (not yet anyway).

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

Ape Fist posted:

Question for anyone using NestJS/Express, or really any server-side framework & PassportJS.

So, I create a standard auth system which spits out a JWT when someone has successfully logged in, but I don't know how to, like, set that token as a permanent header for all client-side requests moving forward?

I know how to, for example, set up some sort of client side JS to make post/gets, etc using the JWT as an access path into the back-end, but I just want to like:

Post login details to server -> Server says OK, hands out JWT. -> JWT is assigned to all header requests moving forward on the client-side for the duration of the session, or until such time as the token expires.

As I said, I know you can make HTTP requests to the server via stuff like fetch() and pass it the JWT there, but surely I don't need to build an entire SPA just to hold, store, and manage the JWT for the whole session? I don't want to build my admin panel as an SPA (not yet anyway).

generally easiest thing to do is put the jwt in a cookie and have server middleware for passport auth using the cookie.
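A framework-free sketch of the idea (the cookie name and the verifyJwt helper are illustrative stand-ins; a real app would use cookie-parser plus passport-jwt or jsonwebtoken):

```javascript
// Parse a raw Cookie header into an object. Minimal; ignores decoding edge cases.
function parseCookies(header = '') {
  return Object.fromEntries(
    header.split(';').filter(Boolean).map((pair) => {
      const [k, ...v] = pair.trim().split('=');
      return [k, v.join('=')];
    })
  );
}

// Express-style middleware: pull the JWT out of the cookie and verify it
// before any route handler runs, so individual routes never touch the token.
function cookieJwtAuth(verifyJwt) {
  return (req, res, next) => {
    const token = parseCookies(req.headers.cookie).token;
    const user = token && verifyJwt(token); // returns a user object or null
    if (!user) return res.status(401).end();
    req.user = user;
    next();
  };
}
```

Mounted once with `app.use(cookieJwtAuth(...))`, every request after login carries the token automatically because the browser sends the cookie; no SPA-side token plumbing needed.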

Ape Fist
Feb 23, 2007

Nowadays, you can do anything that you want; anal, oral, fisting, but you need to be wearing gloves, condoms, protection.

Bruegels Fuckbooks posted:

generally easiest thing to do is put the jwt in a cookie and have server middleware for passport auth using the cookie.

yeah I'm gonna try cookie it up.

edit: Yea you can set it & get it all from the server side with cookies. lovely.

Ape Fist fucked around with this message at 23:59 on Mar 14, 2021

WHERE MY HAT IS AT
Jan 7, 2011
I'm not a huge backend JS guy so I don't know if it's enabled by default these days but note you'll need to implement some form of CSRF protection if you're going to be handling auth that way.


Here's a weird one:

At work we have this ancient system that allows users to preview custom CSS changes in real time. It works by loading the home page with an iframe and then injecting a style element into it which it updates. So if I visit www.clientsite.com/style-editor it will try to load www.clientsite.com/ in an iframe. Since these are the same origin, normally this works. However, on one site in particular this is broken, with browsers throwing a DOMException and complaining about cross-origin frames when we try to access the iframe contents. This little snippet run in the console works fine elsewhere but is broken only for one client:
code:
var iframe = document.getElementsByClassName("StyleEditor-StyleEditorBase-iframe-3lMmB")[0];
if (window.origin === new URL(iframe.src).origin) {
    // this shouldn't throw an exception, right???
    iframe.contentWindow.document;
} else {
    console.log("this is cross origin, you can't touch it");
}
Anyone come across something like this before? The iframe is also being served with X-Frame-Options: SAMEORIGIN so it shouldn't even load if the origins don't match. Real head scratcher.

Tunicate
May 15, 2012

I have something that should be easy but i do not know enough to be able to implement it. I want to write a tampermonkey script so when I click on an etsy image with a filename like this

il_1140xN.2908605725_n4m5.jpg

It downloads the unresized version with a filename like this

il_fullxfull.2908605725_n4m5.jpg

Obviously replacing the name is just a regex but i do not know enough about javascript and web design to do the UI part (creating a button next to the image or whatever would be fine too).

Is there a good guide on how to do tampermonkey scripts, or some template i can rip off? It seems like this should be easy.

Roadie
Jun 30, 2013

Tunicate posted:

I have something that should be easy but i do not know enough to be able to implement it. I want to write a tampermonkey script so when I click on an etsy image with a filename like this

il_1140xN.2908605725_n4m5.jpg

It downloads the unresized version with a filename like this

il_fullxfull.2908605725_n4m5.jpg

Obviously replacing the name is just a regex but i do not know enough about javascript and web design to do the UI part (creating a button next to the image or whatever would be fine too).

Is there a good guide on how to do tampermonkey scripts, or some template i can rip off? It seems like this should be easy.

Very much off the top of my head (you'll probably have to mess with it a bunch depending on page details), you'd be looking at something like this:

JavaScript code:
[...document.querySelectorAll("img[src*='il_1140xN.']")].forEach((element) => {
  element.addEventListener('click', (e) => {
    e.stopPropagation()
    e.currentTarget.src = e.currentTarget.src.replace(
      /il_1140xN\./,
      'il_fullxfull.'
    )
  })
})
The querySelectorAll gets you every img element matching the selector*, then you add a click event listener that rewrites the src attribute on that element when it fires.

* The wrapping array/spread syntax is because of weird legacy stuff: the return value of querySelectorAll is array-like, but is missing a bunch of actual array stuff like forEach, so that converts it into a real array you can use normally for everything.

Impotence
Nov 8, 2010
Lipstick Apathy

Roadie posted:

* The wrapping array/spread syntax is because of weird legacy stuff: the return value of querySelectorAll is array-like, but is missing a bunch of actual array stuff like forEach, so that converts it into a real array you can use normally for everything.

Huh? I've done this for years

Roadie
Jun 30, 2013

Biowarfare posted:

Huh? I've done this for years



Oh, right, I was thinking of map, not forEach.

Still might as well do it so you don't have to go back and change it if you do whatever that takes a real array.

necrotic
Aug 2, 2005
I owe my brother big time for this!
The gotcha is that it's a NodeList, not a true array. (Strictly speaking, querySelectorAll returns a *static* NodeList, so saving it to a variable and reusing it is safe; it's the getElementsBy* methods that return live collections, where the list of nodes may not be the same on future iterations.)

xtal
Jan 9, 2011

by Fluffdaddy

Biowarfare posted:

Huh? I've done this for years



It's only worked that way the last few years

Dancer
May 23, 2011
Hi goons.

So, I'm a junior front end dev. I've had a total of 3 months of training, then managed to score a job right after, so feel free to explain things to me as if I'm stupid.

I am trying to get rid of a React warning, and I've managed to do it but I feel like my fix doesn't qualify as "good code", so I wanted to ask for advice.

The problem:

code:
  useEffect(() => {
    fetchGroups();
  }, []);
I get the react-hooks/exhaustive-deps warning.
If I add fetchGroups to the dependency array (and I add a silly console.log to the fetchGroups function) then I see that the function is triggered on every render, because although the function body might be unchanged, its reference is new each render.

This can be fixed with:

code:
  const fetchGroupsMemo = useCallback(fetchGroups, []);

  useEffect(() => {
    fetchGroupsMemo();
  }, []);
but I feel that this is a bad pattern, maybe? The linter's underlying worry that "fetchGroups might change between renders" is not addressed in any way by this. I just shift the dependency array from one hook to the other, bypassing its purpose either way.

The in my mind best way to do this would be, because fetchGroups truly never changes, to move fetchGroups to a separate module and import it in (because that makes it non-mutable). Problem with this is I am using notistack to report whether the fetch has been successful, and notistack needs to be inside a component. This is also what's stopping me from moving the fetchGroups function inside the useEffect hook, because then I'd have to also add notistack's enqueueSnackbar as a dependency, and I feel like that would cause pure chaos in my app's memory, and would trigger who knows how many re-renders at weird times.

Osmosisch
Sep 9, 2007

I shall make everyone look like me! Then when they trick each other, they will say "oh that Coyote, he is the smartest one, he can even trick the great Coyote."



Grimey Drawer

Dancer posted:

Hi goons.

So, I'm a junior front end dev. I've had a total of 3 months of training, then managed to score a job right after, so feel free to explain things to me as if I'm stupid.

I am trying to get rid of a React warning, and I've managed to do it but I feel like my fix doesn't qualify as "good code", so I wanted to ask for advice.

The problem:

code:
  useEffect(() => {
    fetchGroups();
  }, []);
I get the react-hooks/exhaustive-deps warning.
If I add fetchGroups to the dependency array (and I add a silly console.log to the fetchGroups function) then I see that the function is triggered on every render, because although the function body might be unchanged, its reference is new each render.

This can be fixed with:

code:
  const fetchGroupsMemo = useCallback(fetchGroups, []);

  useEffect(() => {
    fetchGroupsMemo();
  }, []);
but I feel that this is a bad pattern, maybe? The linter's underlying worry that "fetchGroups might change between renders" is not addressed in any way by this. I just shift the dependency array from one hook to the other, bypassing its purpose either way.

The in my mind best way to do this would be, because fetchGroups truly never changes, to move fetchGroups to a separate module and import it in (because that makes it non-mutable). Problem with this is I am using notistack to report whether the fetch has been successful, and notistack needs to be inside a component. This is also what's stopping me from moving the fetchGroups function inside the useEffect hook, because then I'd have to also add notistack's enqueueSnackbar as a dependency, and I feel like that would cause pure chaos in my app's memory, and would trigger who knows how many re-renders at weird times.

It depends a bit on context, but I think the case might be that there actually are dependencies that your useEffect should be relying on. If it's a completely stateless fetch, the current implementation is fine, but it's not all that necessary to fetch it in the component every time, and you might as well provide it from a context higher up.

Either way, I found this discussion on the linter rule pretty helpful when thinking about such things: https://github.com/facebook/react/issues/14920#issuecomment-471070149
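The underlying issue is just reference identity: a function recreated on each render is a brand-new value even when its code is identical, which is exactly what the dependency comparison sees.

```javascript
// Why exhaustive-deps fires: each render creates a fresh function object,
// and the dependency check compares by reference, not by source code.
function render() {
  const fetchGroups = () => { /* same body every time */ };
  return fetchGroups;
}

const a = render();
const b = render();
console.log(a === b); // false: a new reference on every "render"
```

useCallback(fn, []) just caches the first reference, which is why it silences the warning without really answering it; defining the function inside the effect (or importing it from a module when it touches no component state) is what actually removes the dependency.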

Empress Brosephine
Mar 31, 2012

by Jeffrey of YOSPOS
drat 3 months to a job, congrats man, how'd you pull that off

SurgicalOntologist
Jun 17, 2004

Random question, and I'm not the frontend eng just looking for advice to pass along.

We have a video player app where the clips are snippets from a longer video. We're refactoring our pipeline so instead of generating thousands of little clips, we will pretend they are separate clips in the frontend but really just play the correct portion of the video.

The problem is the video scrubber (the timeline of the video for seeking). It seems there are lots of ways to modify it, show the start and end points of the clip, disable the other portions, but for a 20 second clip in a 2 hour video, it's not ideal. One of the devs is saying "scrubber takes ALL the video length, not the clip. that’s something we cannot avoid since its calculations (native player or vimejs) are based in the video metadata."

Anyone have any idea if there's a way to fake that? (we're using vimejs as the quote indicates)

go play outside Skyler
Nov 7, 2005


Do you really need a plug-in for that? Never heard of vimejs, but if you roll your own player, you can completely hide the controls and just draw your own scrubber based on start and end time. Calculating the percentage of playback from video.currentTime is trivial.
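That calculation, as a sketch (function and parameter names invented):

```javascript
// Map the native player's absolute currentTime onto a 0..1 scrubber range
// for a clip running from clipStart to clipEnd (both in seconds).
function clipProgress(currentTime, clipStart, clipEnd) {
  const t = Math.min(Math.max(currentTime, clipStart), clipEnd); // clamp to the clip
  return (t - clipStart) / (clipEnd - clipStart);
}

clipProgress(75, 60, 80); // 15s into a 20s clip → 0.75
```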

fsif
Jul 18, 2003

Guess I have to ask: why are you structuring the video clips this way? At first blush, that sounds like a recipe for disaster.

You'd be forcing the user to load a larger video than they need and building out a complicated, bespoke scrubber/video player that will need to accept a set of sort of non-standard parameters.

oh no computer
May 27, 2003

edit: lol didn't read the thread, this is identical to a question further up

oh no computer fucked around with this message at 19:40 on Mar 16, 2021

SurgicalOntologist
Jun 17, 2004

go play outside Skyler posted:

Do you really need a plug-in for that? Never heard of vimejs, but if you roll your own player, you can completely hide the controls and just draw your own scrubber based on start and end time. Calculating the percentage of playback from video.currentTime is trivial.

Yeah we could probably implement our own controls, I'll see if that's an option.


fsif posted:

Guess I have to ask: why are you structuring the video clips this way? At first blush, that sounds like a recipe for disaster.

You'd be forcing the user to load a larger video than they need and building out a complicated, bespoke scrubber/video player that will need to accept a set of sort of non-standard parameters.

Because generating literally thousands of clips in the backend (most of them overlapping, some nearly identical) is expensive and takes a long time.

We transcode our videos to a chunked streaming format, which allows us to avoid loading the whole video (it might try to load the rest of the video, but it doesn't load the earlier part). Anyway, all the clips are from the same longer video, so once that's loaded I would hope it's cached for any clips, although I don't know the low-level details.

We're only at the POC stage and it seems to work pretty well, minus the scrubber thing. I think it solves more problems than it creates, but we'll be able to validate that soon enough.

WHERE MY HAT IS AT
Jan 7, 2011

WHERE MY HAT IS AT posted:

I'm not a huge backend JS guy so I don't know if it's enabled by default these days but note you'll need to implement some form of CSRF protection if you're going to be handling auth that way.


Here's a weird one:

At work we have this ancient system that allows users to preview custom CSS changes in real time. It works by loading the home page with an iframe and then injecting a style element into it which it updates. So if I visit https://www.clientsite.com/style-editor it will try to load https://www.clientsite.com/ in an iframe. Since these are the same origin, normally this works. However, on one site in particular this is broken, with browsers throwing a DOMException and complaining about cross-origin frames when we try to access the iframe contents. This little snippet run in the console works fine elsewhere but is broken only for one client:
code:
var iframe = document.getElementsByClassName("StyleEditor-StyleEditorBase-iframe-3lMmB")[0];
if (window.origin === new URL(iframe.src).origin) {
    // this shouldn't throw an exception, right???
    iframe.contentWindow.document;
} else {
    console.log("this is cross origin, you can't touch it");
}
Anyone come across something like this before? The iframe is also being served with X-Frame-Options: SAMEORIGIN so it shouldn't even load if the origins don't match. Real head scratcher.

Self-quoting because I figured this out and it's as dumb as you think it is:

The client added their own tracking script to the page which was silently updating document.domain after the initial load, thus breaking the same-origin policy.

fsif
Jul 18, 2003

SurgicalOntologist posted:

Because generating literally thousands of clips in the backend (most of them overlapping, some nearly identical) is expensive and takes a long time.

We transcode our videos to a chunked streaming format, which allows us to avoid loading the whole video (it might try to load the rest of the video, but it doesn't load the earlier part). Anyway, all the clips are from the same longer video, so once that's loaded I would hope it's cached for any clips, although I don't know the low-level details.

We're only at the POC stage and it seems to work pretty well, minus the scrubber thing. I think it solves more problems than it creates, but we'll be able to validate that soon enough.

Gotcha. Yeah, obviously you know your project better than I do so take what I say with whatever appropriate amount of salt.

BUT I'd just caution that the solution of "rolling your own" scrubber might not be that easy. There's no native way to do it with Vime as far as I can tell. If your team is comfortable ditching Vime and want to use a low-level <video> tag to program it, there are still a lot of responsive and accessibility hurdles you'll want to make sure to clear. Also don't know if your chunking solution is in some ways dependent on Vime.

Should also note that in my (not particularly robust, but still existent!) experience with trying to pause a longer video at a certain spot to bring about an artificial end, the native JavaScript events weren't that precise. The clip would stop within a 1+ second range.

So you know, totally unsolicited and underinformed opinion here, but in the abstract a project like this could end up being one where you end up spending more developers and technical debt than you do just running the servers, heh.

SurgicalOntologist
Jun 17, 2004

Appreciate the advice! Probably we don't go too far beyond this POC any time soon anyway, we got bigger priorities before we would attempt to implement our own scrubber (if we do go that way).

go play outside Skyler
Nov 7, 2005


fsif posted:

Gotcha. Yeah, obviously you know your project better than I do so take what I say with whatever appropriate amount of salt.

BUT I'd just caution that the solution of "rolling your own" scrubber might not be that easy. There's no native way to do it with Vime as far as I can tell. If your team is comfortable ditching Vime and want to use a low-level <video> tag to program it, there are still a lot of responsive and accessibility hurdles you'll want to make sure to clear. Also don't know if your chunking solution is in some ways dependent on Vime.

Should also note that in my (not particularly robust, but still existent!) experience with trying to pause a longer video at a certain spot to bring about an artificial end, the native JavaScript events weren't that precise. The clip would stop within a 1+ second range.

So you know, totally unsolicited and underinformed opinion here, but in the abstract a project like this could end up being one where you end up spending more developers and technical debt than you do just running the servers, heh.

Not to derail, but I think what you experienced might be due to keyframes. Not sure you can just seek to any random position in the video like that, it might snap to h264 keyframes.

Come to think about it, I'd also suggest generating the clips on the server-side!

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
If you have a chunked format already, and your server can already pick out which exact chunks are needed when the user wants a clip from 1:15:37 to 1:16:37, you could probably even do it live on the server by generating a header and then appending the relevant chunks from the master file. No need to store each individual clip or anything.
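Assuming an HLS-style chunked format, for example, a "clip" can just be a tiny playlist that points at the master recording's existing segments for the requested window (segment names, sequence number, and durations invented for illustration):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:766
#EXTINF:6.0,
segment_0766.ts
#EXTINF:6.0,
segment_0767.ts
#EXT-X-ENDLIST
```

No re-encoding: the server writes a few lines of text per clip and the player fetches only the referenced segments.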

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.

SurgicalOntologist posted:

Because generating literally thousands of clips in the backend (most of them overlapping, some nearly identical) is expensive and takes a long time.

Going to say this out loud because nobody else has and I've had worse blind spot misses myself, but: if you split your source video into individual images per frame then ffmpeg will be able to spit out bespoke clips pretty quickly.

SurgicalOntologist
Jun 17, 2004

Jabor posted:

If you have a chunked format already, and your server can already pick out which exact chunks are needed when there user wants a clip from 1:16:37 to 1:15:37, you could probably even do it live on the server by generating a header and then appending the relevant chunks from the master file. No need to be storing each individual clip or anything.

Aha, thanks! That should be an easy solution.

Newf posted:

Going to say this out loud because nobody else has and I've had worse blind spot misses myself, but: if you split your source video into individual images per frame then ffmpeg will be able to spit out bespoke clips pretty quickly.

Yeah, it's what we're doing. But still, encoding the clips (at several bitrates) is slower than we'd like. If it were on demand, maybe, but we get them all at once and it seemed an easy part of our pipeline to get rid of. The clips increase the total video to process and store by as much as 20x, and that's only going to increase as we detect more and more types of clips (basically, we detect events from the video). It doesn't seem very scalable.

The clips aren't even the main use case (primary one is a timeline view where it makes sense that the scrubber shows the whole video, and clicking on a "clip" in the timeline just seeks to wherever you clicked) so if we have to sacrifice a little usability of the player it's not a big deal. Anyways, it's only POC stage so maybe we change our mind still. :shrug:

go play outside Skyler
Nov 7, 2005


Comedy option: deliver the video via an mjpeg stream


Lumpy
Apr 26, 2002

La! La! La! Laaaa!



College Slice

Empress Brosephine posted:

drat 3 months to a job, congrats man, how'd you pull that off

90% of getting a job is based on who you know. Being good at that job is just a helpful bonus.



Please note that I am not implying that Dancer is not good at their job, just pointing out that all forms of success are frequently based on luck or connections.
