|
Munkeymon posted:The only use case I can think of for writing even one line of code to serve a static file is to restrict access to the file in some way and I don't understand why people are weirded out by using nginx with Node (or Python or Ruby for that matter). it's not that nginx is bad. nginx is great. the problem is that anything is great behind nginx. if node really scaled, then nginx would be a nice-to-have instead of a requirement.
|
# ¿ Feb 15, 2017 13:40 |
|
ROFLburger posted:Besides being poorly written, this article does a poor job explaining how exactly serving static files locks the thread that processes the event loop. Is it assuming you (or Express's middleware) are reading the files synchronously? the article is terrible. one of the selling points of node.js is that it's supposed to be good at serving a high number of requests that don't use a lot of cpu, and using asynchronous IO instead of threads to read files is supposed to be like, beneficial and poo poo. then you read more and people are like "don't use it to serve static files, it's not as good as etc. etc." it's a bunch of bs.
|
# ¿ Feb 18, 2017 03:40 |
|
biznatchio posted:window.devicePixelRatio will give you the ratio of device pixels to CSS pixels. If you use that to scale the image's size, you can make it map image pixels directly to monitor pixels. Careful though, as the documentation mentions, this value can change (i.e., if the user moves the window between monitors of different DPI levels), and there's no event to tell you the value has changed. Yeah, but the output image will still be scaled and have scaling artifacts unfortunately. It comes pretty close.
|
# ¿ Mar 2, 2017 02:00 |
|
biznatchio posted:Not in my experience on desktop Google Chrome on a high DPI screen. I can't vouch for other browsers, but in that setup, if you use the devicePixelRatio to scale an image's size down via CSS to the proper screen size, it will display properly imagepixel-to-monitorpixel. Let me mess with this today at work. There were three things I did differently: 1. I used Chrome in high DPI mode. 2. I used PNG (smpte as the test pattern) instead of JPEG. 3. I used the HTML5 canvas drawImage call.
|
# ¿ Mar 2, 2017 13:20 |
|
Xom posted:Thanks for the answers so far, y'all are really helpful! Element innerWidth/innerHeight and font size don't seem to cause the js to block until the layout reflows (see https://gist.github.com/paulirish/5d52fb081b3570c81e3a for a list of properties that force layout reflows). Setting the font size will update the layout visually, but since it doesn't force a reflow, if you immediately access innerWidth/innerHeight of something that ought to change as a result of the font size changing, it may not give you the desired value. What I did to work around this is call getBoundingClientRect() on the element. getBoundingClientRect() is slow, but it will immediately trigger a reflow when called on an element. Note that you don't actually have to use the rect returned from getBoundingClientRect(); you can still just get the innerHeight/innerWidth using jQuery later and it will have the right values if you do this.
|
# ¿ Mar 5, 2017 14:25 |
|
Honest Thief posted:The more I get into ES6 and typescript the more I get iffy about using the super() keyword in Javascript. I came from a Java and C# background and part of the reason why I wanted a change of pace was because I was getting tired of class inheritance programming, also because I kinda hated programming in Eclipse, but whenever I see a recent-ish "how to do x in react" article there's always classes. Well I mean, if you have to compare old school js prototype inheritance - which is essentially just pointers to methods - versus having legit subclass polymorphism and interfaces available in the language...
|
# ¿ Apr 20, 2017 13:36 |
|
Snak posted:Hey everyone, I'm trying to make a basic image filtering application that displays a local file, parses it, does some basic manipulation, and displays the result. You can totally resize an image with canvas and have it look smooth. You're probably better off keeping the actual canvas itself the same dimensions, and just using drawImage to scale (it takes height and width parameters, so you can tell it to draw at whatever size you want). If you change the actual width/height of the canvas, the canvas will be cleared, and you will have to call drawImage again - this will cause flickering unless you do something like keep a temporary canvas during the actual resize operation.
|
# ¿ Aug 17, 2017 01:36 |
|
Snak posted:I can't even get that to work. When I call the draw image function again, it just never draws. I think because it's trying to getContext again which returns null if you reuse it. If you can throw up a code sample or jsfiddle or something, I might be able to help.
|
# ¿ Aug 17, 2017 01:56 |
|
Snak posted:Yeah, I'll do that, thanks. While I'm working on that, I edited my post with more basic questions about JS. https://www.youtube.com/watch?v=m6PxRwgjzZw
|
# ¿ Aug 17, 2017 02:44 |
|
Thermopyle posted:You don't have to define any types or interfaces with TS. But then, what's the use other than being able to use some es6 sugar like ()=> ?
|
# ¿ Sep 21, 2017 07:23 |
|
mystes posted:I'm trying to draw crosshairs on a canvas that are centered around the cursor by using the "difference" composite operation and drawing a horizontal and vertical rectangle. I thought that doing this twice would result in the original image, but for some reason this only seems to work for the vertical rectangle; the horizontal rectangles end up being gray after they are drawn the second time. Am I misunderstanding how this works? Or is there just a bug in my code? getBoundingClientRect may return subpixel coordinates. The "top" part of the bounding rect isn't returning an integer (e.g. on my machine it's returning 21.2) If you round the results in the getMousePos method, your example works fine: function getMousePos(canvas, evt) { var rect = canvas.getBoundingClientRect(); return { x: Math.round(evt.clientX-rect.left), y: Math.round(evt.clientY-rect.top) } } However, I would think that if you perform the draw operation twice with the difference operation, you should get the same image back. I don't know why the subpixel coordinate is making it so if you apply the same draw with difference twice, you get a different result. In general you're better off rounding all coordinates you give to canvas drawing - drawing using subpixel precision on canvas generally just looks bad even if it works.
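Here's that rounding fix as a self-contained sketch. CanvasLike and the mock below are stand-ins for a real canvas element (only getBoundingClientRect matters for the math), and the 21.2 subpixel offset mirrors the value mentioned above:

```typescript
// A stand-in for a real <canvas>; only getBoundingClientRect is needed here.
interface CanvasLike {
  getBoundingClientRect(): { left: number; top: number };
}

function getMousePos(canvas: CanvasLike, evt: { clientX: number; clientY: number }) {
  const rect = canvas.getBoundingClientRect();
  return {
    // Round so canvas coordinates land on whole pixels:
    x: Math.round(evt.clientX - rect.left),
    y: Math.round(evt.clientY - rect.top),
  };
}

// Subpixel rect, as a real page might report (e.g. top = 21.2):
const mockCanvas: CanvasLike = {
  getBoundingClientRect: () => ({ left: 10.4, top: 21.2 }),
};
const pos = getMousePos(mockCanvas, { clientX: 100, clientY: 50 });
// pos is { x: 90, y: 29 } instead of { x: 89.6, y: 28.8 }
```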
|
# ¿ Dec 3, 2017 15:31 |
|
theLamer posted:
The purpose of bind is to fix what `this` refers to when the function fires. If a mouse event handler fires, `this` will ordinarily be the element that triggered the event (not your object) unless you bind it - so this seems to be the classic use of ".bind(this)".
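A minimal sketch of that behavior. The names here (button, handleClick) are made up for illustration; the point is that `this` is normally supplied at call time, and bind pins it:

```typescript
const button = { label: "Save" };

// `this` is whatever the caller supplies at invocation time.
function handleClick(this: { label: string }): string {
  return this.label;
}

// bind locks `this` to `button`, no matter how the result is invoked later
// (e.g. when passed as an event handler callback).
const bound = handleClick.bind(button);
```

Once bound, even an explicit `.call` with a different receiver can't change `this` again.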
|
# ¿ Dec 24, 2017 21:20 |
|
Lumpy posted:What is the advantage over object for the curious and lazy? with objects:
- keys can clobber prototype methods
- keys can only be strings (or symbols)
- property/key order isn't guaranteed
- there's no forEach method
- you have to use hasOwnProperty for an existence check in case the value is zero or null
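A quick illustration of a few of those points with Map (the keys and values here are invented for the example):

```typescript
const m = new Map<string, number | null>();
m.set("hasOwnProperty", 1); // can't clobber anything: Map keys live in their own space
m.set("count", 0);
m.set("empty", null);

// Existence checks work even when the value is 0 or null:
const hasCount = m.has("count"); // true
const hasEmpty = m.has("empty"); // true

// Built-in iteration, in insertion order:
const keys = [...m.keys()]; // ["hasOwnProperty", "count", "empty"]
```

Map keys can also be any value (objects, numbers), not just strings, via `new Map<object, string>()` and the like.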
|
# ¿ Feb 13, 2018 05:35 |
|
porksmash posted:Is there any sort of advanced object filtering library I can leverage instead of rolling my own? I have to implement something on the level of Newegg's Power Search, but also with user selectable comparisons or value ranges, dates and date ranges. I basically want the ability to dynamically generate SQL WHERE clauses and run it against an array of objects. The easiest way I've seen it done in any platform is to leverage the LINQ expression builder in C#, and make the JS just post an object consisting of the set of sorts and filters and having the server do everything. The server side code looks like this: https://gist.github.com/afreeland/6733381
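The same idea works client-side too: post a list of filter descriptors and fold them into a single predicate over an array of objects. This is a hedged sketch, not the linked gist's code - the Filter shape and operator names are invented for illustration:

```typescript
type Op = "eq" | "gt" | "lt" | "contains";

interface Filter {
  field: string;
  op: Op;
  value: unknown;
}

// Combine all filters into one predicate (AND semantics, like a WHERE clause).
function buildPredicate<T extends Record<string, any>>(filters: Filter[]) {
  return (row: T): boolean =>
    filters.every(({ field, op, value }) => {
      const v = row[field];
      switch (op) {
        case "eq":       return v === value;
        case "gt":       return v > (value as any);
        case "lt":       return v < (value as any);
        case "contains": return String(v).includes(String(value));
      }
    });
}

const rows = [
  { name: "SSD", price: 80 },
  { name: "HDD", price: 40 },
];
const cheap = rows.filter(
  buildPredicate<{ name: string; price: number }>([{ field: "price", op: "lt", value: 50 }])
);
```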
|
# ¿ Apr 20, 2018 11:59 |
|
Revalis Enai posted:I'm trying to find and remove duplicates in an array. The array would have an item name and item code like: The way the original code is de-duplicating is that it's putting all the objects into an associative array as keys, and since you can't have duplicate keys in an associative array - voila, the stuff is de-duplicated. However, it's not great for objects because you're only de-duplicating on the key names. A better way of de-duplicating is to use Array.filter... e.g. code:
Bruegels Fuckbooks fucked around with this message at 00:07 on Apr 29, 2018 |
# ¿ Apr 29, 2018 00:04 |
|
Revalis Enai posted:Wow, the code got exactly what I was looking for and it ran fast. I'm trying to figure out how it actually does the filtering. From what I understand it's going through every item code(item[1]) to see if the object has its own property? If true, then it's set to false, and if false, it does (seen[k] =true), which I'm not sure what that does. So the deal is array.filter will return the items in the array that pass the test, and exclude the ones that fail. e.g. code:
So what we're really trying to do is "exclude all elements in the array that have already been seen." "Seen" is an object containing keys that have already been used. It starts empty, but as items with keys are found, the keys are added to the Seen object, and anything with a key that's already in the seen object won't pass the filter. code:
1. Get the key of the object (if you want to do whole object comparison, you could just JSON.stringify the object here). 2. If the object has not been 'seen', then it passes the test - but we add its key to the 'seen' object. 3. If the object has been seen, exclude it from the returned array.
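Those three steps can be sketched as follows. The original snippet from the thread isn't shown here, so this is a reconstruction of the described approach; the Item shape is assumed for illustration:

```typescript
interface Item { code: string; name: string; }

function dedupe(items: Item[]): Item[] {
  const seen: Record<string, boolean> = {};
  return items.filter(item => {
    // Step 1: get the key (or JSON.stringify(item) for whole-object comparison)
    const k = item.code;
    // Step 3: already seen -> exclude from the returned array
    if (Object.prototype.hasOwnProperty.call(seen, k)) {
      return false;
    }
    // Step 2: first sighting -> keep it and remember the key
    seen[k] = true;
    return true;
  });
}

const uniq = dedupe([
  { code: "A1", name: "Widget" },
  { code: "B2", name: "Gadget" },
  { code: "A1", name: "Widget (dupe)" },
]);
// uniq keeps the first A1 and drops the second
```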
|
# ¿ Apr 30, 2018 01:57 |
|
roomforthetuna posted:I suppose that works okay if you don't mind writing wrappers around things, eg, if foo in my example had been canvasContext.bezierCurveTo. The deeper problem really is that there's no way to define a type as "an array of exactly 2 elements" or, even better, "an array that has an even number of elements that is at least 2". You can specify an array of fixed length as a tuple in typescript. e.g. code:
e.g. code:
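A small sketch of the tuple idea (Point and dist are names invented here). This covers the "exactly 2 elements" case; the "even number of elements, at least 2" constraint isn't directly expressible this way:

```typescript
// A fixed-length pair: exactly two numbers, no more, no fewer.
type Point = [number, number];

function dist(a: Point, b: Point): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

const d = dist([0, 0], [3, 4]); // 5
// dist([0, 0], [3, 4, 5]) would be a compile-time error: not a [number, number]
```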
|
# ¿ May 20, 2018 07:26 |
|
nickmeister posted:Totally newbie here, self-learning until I can start a course in the fall: 'document' only exists in a browser context - like, if you press f12 on a website using your browser, and type 'document.getElementById(x)', the command will work. however, if you're doing this from like a node.js command line or something similar, that object will not exist. so how are you running the js?
|
# ¿ Jun 17, 2018 16:36 |
|
Chenghiz posted:Put a link to site A on site B and check its style using getComputedStyle to see if it's been visited or not. i thought you were joking but somebody actually made a library that does this. https://readwrite.com/2008/05/28/socialhistoryjs_see_which_site/
|
# ¿ Jun 19, 2018 14:09 |
|
FSMC posted:I think I started the loop discussion and my code went full circle. First, be extra sure you got rid of all the console.logs, because unless the console is open, accessing the console will throw an exception and it'll skip over all the code (you do realize that the IE11 debugger has traces and you can just log stuff to console by right clicking in the debugger and adding a tracepoint, right?) Second, did you try setting the debugger's exception catching mode to 'break on all exceptions'? Third, you can open the debugger using the 'debugger' statement in js, so one possibility is putting that statement in the place that doesn't get called when the debugger is closed, and moving that statement earlier and earlier in the program until the debugger starts opening, and then concluding that such-and-such line is the culprit. There are some exceptions that are handled differently when the debugger is open/closed - one time this bit me was in the IE10 days with 'use strict': my js had 'use strict', and if your code has 'use strict' and there's a strict mode violation, in IE10 the code would not throw an exception or open the debugger, it would just do gently caress all, but it would work if the debugger is open! (Hmm, sounds like your problem, come to think of it.)
|
# ¿ Jul 21, 2018 08:50 |
|
Dumb Lowtax posted:After seeing those performance tests on the previous page I'm going to try it anyway in my inner loops and run the profiler just to be sure whatever else I typed was being inlined to that same thing basically by the compiler At least one problem with flying by performance numbers is the McNamara fallacy, e.g. quote:The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide. There's a tendency, when quantitative facts (e.g. for loops written with 'i' are x amount faster) are juxtaposed with qualitative ones (forEach and for-in loops are easier to maintain), to let the more measurable thing win. It can feel safer to rely on quantitative facts than qualitative ones (which can often be reduced to the level of anecdote). So let me throw down some anecdotes: a) The difference in your type of for loop is trivial in almost all for loops given data size and expected iterations. b) In places where the type of for loop actually makes a difference, you probably have a larger design issue at work that would be better addressed than optimizing your for loop. c) Scale matters - sure you might save 15ms by changing the loop type, but if the inner guts of the loop are taking 1 second in that time, who gives a gently caress about the iterator. Now as a programmer, there's a natural inclination to believe the micro benchmarks, but do be aware that kind of thinking is what kept the vietnam war going for 7 years.
|
# ¿ Jul 22, 2018 15:57 |
|
huhu posted:Add a sleep function? just to harp on this, javascript is single threaded (excepting web workers), so this will block the UI thread and any timers you have running with setTimeout etc. You usually don't want to sleep like this. the setTimeout equivalent does not block the main thread and just executes a function after x amount of time - however, x is just the earliest the function can fire; it's not guaranteed to fire at exactly x, and setTimeout is also subject to the timer resolution of your operating system (e.g. the timer might be accurate to ~15ms instead of 1ms). also, new Date().getTime() is a bad way of getting current milliseconds because at very best it will be accurate to 1ms. performance.now() can be accurate to the microsecond, except it isn't actually, because browser vendors realized that returning the real time from performance.now() made machines vulnerable to fingerprinting, so the brilliant solution is that different browser vendors add random garbage to the timestamp returned by performance.now() to make machines less vulnerable to fingerprinting (https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) - to the point where I think performance.now() is less accurate than new Date().getTime() on Firefox. in conclusion the internet was a mistake.
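The non-blocking alternative looks like this - a promise-wrapped setTimeout. Nothing blocks the thread while the delay elapses, and as noted above the delay is a floor, not an exact figure:

```typescript
function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function demo(): Promise<number> {
  const t0 = Date.now();
  await sleep(50);        // other timers and events still run during this
  return Date.now() - t0; // at least ~50, often a bit more, never guaranteed exact
}
```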
|
# ¿ Sep 4, 2018 18:19 |
|
FSMC posted:There are two use cases why I want good html coverage and why data is in the html in the first place. The first is sets of accounts in ixbrl format stored as xhtml, so I need to test anything built over a variety of accounts created using various software. The second is I'm working on an extension that does its magic by getting and adding data to different sites I have no control over; if I fix something on one site I want tests to make sure I haven't broken it on another site, etc. in general the point of unit tests should not be to verify end-to-end functionality; the point of unit tests is to verify your code in isolation. the test you're talking about is more an end-to-end test or an integration test. the problems with end-to-end from this perspective are a) End to end testing is slower than unit testing. b) End to end testing is less reliable, as you're at the whims of the external site. c) In your case, since you're testing stuff that's not under your control, the tests might break for reasons that aren't your fault or problem. while you can test live dom manipulation using jasmine/karma, you're probably better off not using unit test tools for this kind of test, and using browser automation tools like selenium.
|
# ¿ Sep 14, 2018 12:29 |
|
roomforthetuna posted:I guess this was something like i would generally prefer code:
|
# ¿ Nov 10, 2018 17:06 |
|
Joda posted:That's true. It adds another dimension to a keyword that already has too many meanings. Tbh I never actually used it until ES6 and TS when it started having the same function as in any other OO language. the arrow function keeping the enclosing `this` is a super-useful feature, but the issue is that arrow does so much, and it is relatively difficult to google. If Jimmy hasn't looked up what it does, it's entirely possible he will miss the `this` implication of using the arrow function, and that'll take him down a rabbit hole for a few hours.
|
# ¿ Nov 12, 2018 14:34 |
|
akadajet posted:The guys I pass on when interviewing? Yes. dude if you've never had a back-end developer who doesn't give a gently caress about front end / indian contractor / new kid out of school ever have to do a minor update to a UI... like, it's probably the least damaging thing that they could possibly be working on.
|
# ¿ Nov 13, 2018 03:52 |
|
Joda posted:Don't you lose significant performance potentially? Also I assume there is no way to make wasm compatible with an ES5 platform i haven't run into a case where compiling back to es5 hurt performance short of "oh, there are more characters so it'll increase the load time of the page." emscripten can compile to either asm.js or webassembly, so you can provide both a web assembly version and an asm.js. asm.js can run in IE11 or Safari. the load times are a lot better for webassembly and IE11/Safari didn't run the asm.js stuff nearly as well as chrome did, but it might be possible to use it for simple use cases.
|
# ¿ Dec 1, 2018 17:12 |
|
Joda posted:I assume the entire point of using wasm proper (near native performance and threading) is lost if you compile to asm.js? it depends on your use case. even the asm.js demos can be pretty impressive from a performance perspective. the big issues with asm.js are: a) There isn't a great way of telling how much memory you're using with asm.js b) Initial load times Comparing webassembly vs. asm.js performance: https://blogs.unity3d.com/2018/09/17/webassembly-load-times-and-performance/ Webassembly has much better load times; computational performance improves too, but asm.js was already pretty good in that regard.
|
# ¿ Dec 1, 2018 18:27 |
|
Suspicious Dish posted:I write TypeScript because I find it worth the hassle, but 99% of these stupid bundler or compiler things are the most fragile pieces of software I've ever used, and manage not only to screw over your computer, but often browser devtools as well. Babel I've used because I needed to get angular to work in IE11. I imagine there are other ways. I did get my most recent project to the point where source maps were working, and was able to debug in typescript while serving bundled files. If you have a project where you bundle/minify and you're not getting working source maps for whatever reason, either ditch the bundler if it doesn't support source maps at all, or fix the configuration. I've seen people cling to bundlers that don't emit source maps (like the one built into asp.net mvc, although apparently people have done projects to make it support them, if you want to trust those), and you can get 95% of the performance improvement just by enabling gzip compression for static files on the web server without having it gently caress your debugging.
|
# ¿ Dec 12, 2018 14:17 |
|
Suspicious Dish posted:no, I have source maps enabled. but source maps are extremely fragile, and they don't seem to work for breakpoint debugging (local variable names are still mangled, sometimes chrome just shows me the unmapped .js file anyway). error stacks as well seem to be completely untranslated through source maps. and how do i import modules through the JS console REPL? i've definitely gotten breakpoint debugging to work with source maps before, it's not an intrinsically impossible thing.
|
# ¿ Dec 13, 2018 02:42 |
|
porksmash posted:Typescript question - if I have a type defined as a list of specific strings, is there a way to create an interface that has optional keys for each possible value in that type? Example how about : code:
|
# ¿ Jan 1, 2019 10:42 |
|
quote:For some reason, this simply doesn't work. If you replace the "success" part with something like alert("success"), this works just fine. I, for the life of me, have no idea why Javascript doesn't like the version with the getElementById. Anyone know why? Debugger says: Invalid left hand sign in assignment. I think it's doing: code:
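The same error reproduces without any DOM: an assignment target has to be a variable or property, and a function call like getElementById(...) isn't one. A sketch (getScore is a made-up stand-in for the getElementById call):

```typescript
function getScore(): number { return 1; }

// getScore() = 5;                // SyntaxError: invalid left-hand side in assignment
const matches = getScore() === 5; // comparison instead of assignment: legal, false
```

Assigning to a *property* of the returned object (e.g. `getScore().something = 5` on an object, or `.innerHTML` on an element) is fine; it's assigning to the call itself that breaks.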
|
# ¿ Jan 30, 2019 02:12 |
|
smackfu posted:Interesting, I figured they were using stringify to validate the user-provided JSON but couldn't figure out why they were then parsing and using the result. i've definitely made a utility method "copy" and stuck that JSON stringify/JSON parse in there. javascript is always pass by value, but if a variable refers to an object, the "value" is a reference to the object - this means if you do something like slicing an array and then modify a property of an element in the sliced array, the element in the original array will also be modified, and you can stop that from happening by doing this.
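Concretely, the shallow vs. deep difference looks like this:

```typescript
const original = { items: [{ qty: 1 }] };

// slice() copies the array, but the elements are still shared references:
const shallow = original.items.slice();
shallow[0].qty = 99;                    // also changes original.items[0]

// The stringify/parse round trip copies everything:
const deep = JSON.parse(JSON.stringify(original)) as typeof original;
deep.items[0].qty = 5;                  // original stays untouched
```

Worth remembering the trick's limits: the JSON round trip drops functions and undefined values, and turns Dates into strings.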
|
# ¿ Feb 2, 2019 18:20 |
|
Nolgthorn posted:
so why are you doing this rather than json.stringify/json parse again?
|
# ¿ Feb 3, 2019 09:13 |
|
Obfuscation posted:I've been using [...Array(x)] as a hacky way to initialize empty arrays but this doesn't work in typescript. Is there a better way? edit nm
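For anyone hitting the same thing: Array.from is the usual TypeScript-friendly replacement for `[...Array(x)]`, since it accepts any `{ length }` plus an optional mapper:

```typescript
const zeros = Array.from({ length: 5 }, () => 0);        // [0, 0, 0, 0, 0]
const indices = Array.from({ length: 5 }, (_, i) => i);  // [0, 1, 2, 3, 4]
```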
|
# ¿ Feb 16, 2019 18:00 |
|
Dumb Lowtax posted:Now the Chrome Developer Tools window has logpoints so you don't need Console.log at all.... as infrequently as you should have already been using it due to breakpoints and hovering over variable names for contents normally being the more appropriate way to monitor a running program man it seemed dumb to just set a conditional breakpoint and put console.log() in the statement like i've been doing to get the equivalent effect, but logpoints is apparently a new feature in 73, so I guess I can look forward to that soon.
|
# ¿ Mar 27, 2019 04:23 |
|
Suspicious Dish posted:breakpoints don't work correctly half the time for me. thanks source maps before typescript and this new node/webpack ecosystem, i didn't see many projects that were more than, say, 200k ~ 300k of javascript - people would be like "gently caress this, all this code shouldn't be on the front end god have mercy let's not write any more javascript." nowadays i'll see ts projects run half a million ~ million+ loc not including jquery, react / whatever, and these take minutes to build. on the one hand it's cool that these tools make it practical to produce a working project with this much javascript, on the other hand, it's a crime against nature. if i had to build every time i did console.log i'd go insane.
|
# ¿ Mar 27, 2019 18:31 |
|
duz posted:It breaks IE. What more of a benefit do you need? typescript will let you use template literals with ie if you're so inclined.
|
# ¿ May 11, 2019 02:19 |
|
code:
Infinity. The reason? Math.min() with no arguments returns Infinity (whereas Math.min(undefined) is NaN), so Math.min.apply(null, []) becomes Math.min(), which equals Infinity.
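Quick demonstration that apply (or spread) with an empty array really is just a zero-argument call:

```typescript
const empty: number[] = [];
const viaApply = Math.min.apply(null, empty); // same as Math.min() -> Infinity
const viaSpread = Math.min(...empty);         // spread form, same result
// By contrast, Math.min(undefined as any) is NaN: undefined coerces to NaN,
// whereas zero arguments yield the identity element Infinity.
```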
|
# ¿ May 21, 2019 19:15 |
|
Dumb Lowtax posted:That too lol Yeah, the big problem when evaluating libraries is knowing what to reuse and what you should write yourself. There's always a cost when using a third party library - it can be difficult to assess how well you understand the problem the library is trying to solve, dependency management can be a huge bitch, and it can be difficult to assess how well tested the library is or whether it's handling cases that are pertinent to your problem domain. I would say new developers or developers fresh out of school are generally going to use every library under the sun and then get burned when a problem comes up that's difficult to explain because it in no way relates to your problem domain, but is just a consequence of including every library under the sun. This jades people and turns them into "gently caress it, write it myself" types - these types get burned when they try to fix issues or make their own (crypto, deserialization/serialization, authentication, XSS, input sanitization etc.) Judgement eventually gets better through experience.
|
# ¿ May 28, 2019 18:20 |