Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

Munkeymon posted:

The only use case I can think of for writing even one line of code to serve a static file is to restrict access to the file in some way and I don't understand why people are weirded out by using nginx with Node (or Python or Ruby for that matter).

it's not that nginx is bad. nginx is great. the problem is that anything is great behind nginx. if node really scaled, then nginx would be a nice-to-have instead of a requirement.


Bruegels Fuckbooks
Sep 14, 2004


ROFLburger posted:

Besides being poorly written, this article does a poor job explaining how exactly serving static files locks the thread that processes the event loop. Is it assuming you (or Express's middleware) are reading the files synchronously?

the article is terrible. one of the selling points of node.js is that it's supposed to be good at serving a high number of requests that don't use a lot of cpu, and using asynchronous IO instead of threads to read files is supposed to be like, beneficial and poo poo. then you read more and people are like "don't use it to serve static files, it's not as good as etc. etc." it's a bunch of bs.

Bruegels Fuckbooks
Sep 14, 2004


biznatchio posted:

window.devicePixelRatio will give you the ratio of device pixels to CSS pixels. If you use that to scale the image's size, you can make it map image pixels directly to monitor pixels. Careful though, as the documentation mentions, this value can change (i.e., if the user moves the window between monitors of different DPI levels), and there's no event to tell you the value has changed.

Yeah, but the output image will still be scaled and have scaling artifacts unfortunately. It comes pretty close.

Bruegels Fuckbooks
Sep 14, 2004


biznatchio posted:

Not in my experience on desktop Google Chrome on a high DPI screen. I can't vouch for other browsers, but in that setup, if you use the devicePixelRatio to scale an image's size down via CSS to the proper screen size, it will display properly imagepixel-to-monitorpixel.

Example:

Raw Image:


Chrome on a High DPI display -- top image is a raw <img> element, bottom image sets CSS to the image's naturalWidth and naturalHeight divided by window.devicePixelRatio:


The scaled image is pixel perfect.

After zooming the browser to 110% and reloading, to make sure the devicePixelRatio wasn't a whole number:


The scaled image is still pixel perfect.

Code below:

code:
    <body>

        <div>Device Pixel Ratio: <span id="spnPixelRatio"></span>.
        <br>
        <br>
        Raw image: <img src="image.gif"/>
        <br>
        <br>
        Scaled image: <img src="image.gif" id="imgScaled"/>
        </div>

        <script type="text/javascript">

        window.addEventListener("load", function () {
            var dpc = window.devicePixelRatio;

            document.getElementById("spnPixelRatio").innerText = dpc;

            // Scale the CSS size down by the device pixel ratio so one
            // image pixel maps to one device pixel.
            var imgScaled = document.getElementById("imgScaled");
            imgScaled.style.width = (imgScaled.naturalWidth / dpc) + "px";
            imgScaled.style.height = (imgScaled.naturalHeight / dpc) + "px";
        });

        </script>
    </body>
edit: Also tried resizing by setting CSS zoom instead of width/height, the result also came out pixel perfect. Code:
code:
imgScaled.style.zoom = ((1 / dpc)*100) + "%";

Let me mess with this today at work. There were three things I did differently:
1. I used Chrome in high-DPI mode.
2. I used a PNG (SMPTE test pattern) instead of a JPEG.
3. I drew the image with the HTML5 canvas drawImage call.
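For reference, here's a minimal sketch of the canvas variant being described - size the canvas backing store to the image and shrink the CSS size by devicePixelRatio so drawImage maps image pixels to device pixels. The image path is made up; this is not the exact code from either test.
code:
var img = new Image();
img.src = "smpte.png"; // hypothetical test pattern

img.addEventListener("load", function () {
    var dpr = window.devicePixelRatio || 1;
    var canvas = document.createElement("canvas");

    // Backing store in image pixels, CSS size in CSS pixels,
    // so one image pixel ends up on one device pixel.
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.style.width = (img.naturalWidth / dpr) + "px";
    canvas.style.height = (img.naturalHeight / dpr) + "px";

    var ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0);
    document.body.appendChild(canvas);
});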

Bruegels Fuckbooks
Sep 14, 2004


Xom posted:

Thanks for the answers so far, y'all are really helpful!

jQuery question: Is it possible for there to be a race condition in the following, where myDiv is a jQuery selection?
code:
myDiv.css('font-size', '10px');
return myDiv.innerHeight();
Like maybe the computer is really slow, and there's a lot of text, with some 1MB inline images with height:1em, and it's all flowing around some floating element; will jQuery block until the correct innerHeight is available? (Assume that the images were already fully loaded before our code set the font-size.)

Anybody else excited that Firefox 52, with the async keyword, is scheduled for release on Tuesday? I'm rewriting Cardery.js to use it.
I remember having a similar problem - if you try to read innerHeight/innerWidth immediately after setting the font-size style, you may get values that don't reflect the font size change. (It doesn't seem to be jQuery-specific.)

Reading an element's innerWidth/innerHeight after changing the font size doesn't seem to force the layout to reflow first (see https://gist.github.com/paulirish/5d52fb081b3570c81e3a for a list of properties that do force a layout reflow). Setting the font size will update the layout visually, but if you immediately read innerWidth/innerHeight of something that ought to change as a result, you may not get the updated value.

What I did to work around this is call getBoundingClientRect() on the element. getBoundingClientRect() is slow, but it immediately triggers a reflow. Note that you don't actually have to use the rect it returns - you can still read innerHeight/innerWidth via jQuery afterwards and they will have the right values.
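A minimal sketch of that workaround in plain DOM (the element id is made up; the same idea applies to a jQuery selection):
code:
var el = document.getElementById("myDiv"); // hypothetical element

el.style.fontSize = "10px";

// getBoundingClientRect() forces a synchronous layout, so reads
// after this point reflect the new font size.
el.getBoundingClientRect();

var height = el.clientHeight; // or $(el).innerHeight() with jQuery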

Bruegels Fuckbooks
Sep 14, 2004


Honest Thief posted:

The more I get into ES6 and typescript the more I get iffy about using the super() keyword in Javascript. I came from a Java and C# background and part of the reason why I wanted a change of pace was because I was getting tired of class inheritance programming, also because I kinda hated programming in Eclipse, but whenever I see a recent-ish "how to do x in react" article there's always classes.
So are ES6 classes nothing but syntax sugar and should I just get used to them, because they're here to stay, and somehow keep reminding myself that Java/C# classes aren't the same as JS?

Well I mean, if you have to compare old school js prototype inheritance - which is essentially just pointers to methods - versus having legit subclass polymorphism and interfaces available in the language...
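Roughly speaking, the two styles being compared (a sketch - the class version compiles down to something very close to the prototype version):
code:
// old school prototype style
function Animal(name) {
    this.name = name;
}
Animal.prototype.speak = function () {
    return this.name + " makes a noise";
};

// ES6 class style - mostly sugar over the same prototype machinery
class Dog extends Animal {
    speak() {
        return super.speak() + " (woof)";
    }
}

console.log(new Dog("Rex").speak()); // "Rex makes a noise (woof)"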

Bruegels Fuckbooks
Sep 14, 2004


Snak posted:

Hey everyone, I'm trying to make a basic image filtering application that displays a local file, parses it, does some basic manipulation, and displays the result.

The biggest problem that I'm having is in displaying the result. Using HTML5's file API, it's easy to display the local file, then use a FileReader to get the image data and redraw it on a canvas to display the result. The problem is that the source image is displayed as an img tag, which handles dynamic resizing just fine, but the result is drawn in a canvas, and I can't figure out how to get it to play nice with dynamic resizing at all. I have it set up to have the canvas match the size of the source image when it's drawn, and that works fine, but I can't get its size to ever update after that. It's a huge mess.

The best way to do all of this is if I could turn the post-processed img object into a new file object and provide it as a source to an img tag. Then the input and output of the app would be displayed and formatted using the exact same mechanisms and be completely consistent.

Is there a way to do this? Obviously I can't create a local file on the user's system without their explicit action, so I guess what I'm asking is, is there a way for the locally running JS to serve a file to an img tag? It sounds kind of crazy when I say it like that, but I'm getting desperate here.

You can totally resize an image with canvas and have it look smooth.

You're probably better off keeping the actual canvas itself the same dimensions, and just using drawImage to scale (it takes width and height parameters, so you can tell it to draw at whatever size you want).

If you change the actual width/height of the canvas, the canvas will be cleared, and you will have to call drawImage again - this will cause flickering unless you do something like keep a temporary canvas during the actual resize operation.
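A rough sketch of that approach - the canvas element keeps fixed dimensions and drawImage does the scaling (element ids invented):
code:
var canvas = document.getElementById("outputCanvas"); // hypothetical, fixed-size canvas
var ctx = canvas.getContext("2d");

function drawScaled(img, targetWidth, targetHeight) {
    // Clear and redraw; drawImage's destination width/height do the scaling,
    // so canvas.width/height never change and the canvas isn't reset.
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(img, 0, 0, targetWidth, targetHeight);
}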

Bruegels Fuckbooks
Sep 14, 2004


Snak posted:

I can't even get that to work. When I call the draw image function again, it just never draws. I think because it's trying to getContext again which returns null if you reuse it.

God I hate frontend poo poo.

Okay, so I should make the canvas some theoretical maximum for my site, and then draw whatever size I need? I will give that a shot, thanks.

If you can throw up a code sample or jsfiddle or something, I might be able to help.

Bruegels Fuckbooks
Sep 14, 2004


Snak posted:

Yeah, I'll do that, thanks. While I'm working on that, I edited my post with more basic questions about JS.

https://jsfiddle.net/8t1zuewg/

Good news and bad news: While trying to figure out why the output of jsfiddle is different from my local output, I fixed the dynamic resizing just fine. The problem persists, however, that I can't make the image the correct size in the first place. In JS fiddle, the image is drawn smaller than it should be, while in my regular browser, the image is drawn larger than it should be.

The sizing is done here, lines 30-37
code:
var sourceImage = document.getElementById("inputImage");
    
console.log("getContext");
var context = canvas.getContext("2d");
    
console.log("drawImage");
//context.drawImage(sourceImage, 0, 0, canvas.width, canvas.height);
context.drawImage(sourceImage, 0, 0, sourceImage.clientWidth, sourceImage.clientHeight);
Please forgive all of the horrible console.log output - I can't use the NetBeans debugger and Chrome's devtools at the same time, so I added a bunch of console output so I could see what was going on, since NetBeans wasn't showing me errors that Chrome was.


edit: This new problem is because I'm now using CSS to change the display size of the canvas, which also correctly adjusts the displayed size of what's drawn on it, BUT when you draw to the canvas, you're drawing at the canvas's actual (intrinsic) size. I think.

So if I just make the canvas the actual size of the image and draw on the whole thing and change its display size with css, that should work.

edit again: Which it did.

Thanks so much. I really need to get an actual rubber ducky. Most of my confusion stemmed from mixing up the canvas's CSS width with its DOM width property. Once I realized that, it all made sense.
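In code, the fix described above amounts to something like this (a sketch; inputImage is from the fiddle, the canvas id is made up):
code:
var sourceImage = document.getElementById("inputImage");
var canvas = document.getElementById("outputCanvas"); // hypothetical

// Drawing coordinates use the canvas's intrinsic (DOM) width/height...
canvas.width = sourceImage.naturalWidth;
canvas.height = sourceImage.naturalHeight;
canvas.getContext("2d").drawImage(sourceImage, 0, 0);

// ...while CSS only controls how big it's displayed.
canvas.style.width = "100%";
canvas.style.height = "auto";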

https://www.youtube.com/watch?v=m6PxRwgjzZw

Bruegels Fuckbooks
Sep 14, 2004


Thermopyle posted:

You don't have to define any types or interfaces with TS.

But then, what's the use, other than being able to use some ES6 sugar like () => ?

Bruegels Fuckbooks
Sep 14, 2004


mystes posted:

I'm trying to draw crosshairs on a canvas that are centered around the cursor by using the "difference" composite operation and drawing a horizontal and a vertical rectangle. I thought that doing this twice would result in the original image, but for some reason this only seems to work for the vertical rectangle; the horizontal rectangles end up being gray after they are drawn the second time. Am I misunderstanding how this works? Or is there just a bug in my code?

https://jsfiddle.net/d3ao82v6/

Edit: This seems to work in Firefox, but not in Chrome?

getBoundingClientRect may return subpixel coordinates. The "top" part of the bounding rect isn't returning an integer (e.g. on my machine it's returning 21.2).

If you round the results in the getMousePos method, your example works fine:

code:
function getMousePos(canvas, evt) {
    var rect = canvas.getBoundingClientRect();
    return {
        x: Math.round(evt.clientX - rect.left),
        y: Math.round(evt.clientY - rect.top)
    };
}

That said, I would also think that performing the same draw twice with the difference operation should give you the original image back - I don't know why subpixel coordinates make the second application come out different.

In general you're better off rounding all coordinates you give to canvas drawing - drawing using subpixel precision on canvas generally just looks bad even if it works.

Bruegels Fuckbooks
Sep 14, 2004


theLamer posted:

code:
this.onMouseEnter = this.onMouseEnter.bind(this);
this.onMouseLeave = this.onMouseLeave.bind(this);
Am I the only one who thinks this kind of paradigm is bad?

EDIT: As in, either use bind everywhere or don't use 'this.method'.

The purpose of bind is to fix what 'this' refers to when the function eventually fires. When an event handler fires, 'this' will ordinarily be the element that raised the event (or undefined in strict mode / React class methods), not your object, unless you bind it - so this looks like the classic use of ".bind(this)".
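A small sketch of where this bites, using plain DOM events (names invented):
code:
class Tooltip {
    constructor(el) {
        this.visible = false;
        // Without bind, 'this' inside onMouseEnter would be the element
        // that fired the event, not the Tooltip instance.
        this.onMouseEnter = this.onMouseEnter.bind(this);
        el.addEventListener("mouseenter", this.onMouseEnter);
    }

    onMouseEnter() {
        this.visible = true;
    }
}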

Bruegels Fuckbooks
Sep 14, 2004


Lumpy posted:

What is the advantage over object for the curious and lazy?

with objects,

- keys can clobber prototype methods
- keys can only be strings (or symbols)
- property/key order historically not guaranteed
- no forEach method
- have to use hasOwnProperty for an existence check in case the value is zero or null
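A quick Map sketch hitting those points:
code:
const counts = new Map();

counts.set("constructor", 1);  // no prototype clobbering
counts.set(42, 2);             // keys don't have to be strings
counts.set({ id: 3 }, 0);      // objects work as keys too

console.log(counts.has("constructor"));                   // true, even if the value were 0 or null
counts.forEach((value, key) => console.log(key, value));  // iterates in insertion order
console.log(counts.size);                                 // 3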

Bruegels Fuckbooks
Sep 14, 2004


porksmash posted:

Is there any sort of advanced object filtering library I can leverage instead of rolling my own? I have to implement something on the level of Newegg's Power Search, but also with user selectable comparisons or value ranges, dates and date ranges. I basically want the ability to dynamically generate SQL WHERE clauses and run it against an array of objects.

The easiest way I've seen it done on any platform is to leverage the LINQ expression builder in C#, and have the JS just post an object describing the set of sorts and filters, with the server doing everything.

The server side code looks like this:
https://gist.github.com/afreeland/6733381
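If it does have to stay client-side, here's a rough sketch of the same idea with composable predicates over an array of objects (the field names and the products array are invented):
code:
// Each user-selected filter becomes a predicate; AND them together.
const filters = [
    item => item.price >= 100 && item.price <= 500,            // value range
    item => new Date(item.released) >= new Date("2016-01-01"), // date range
    item => item.brand === "Acme",                             // equality
];

const matches = products.filter(item => filters.every(f => f(item)));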

Bruegels Fuckbooks
Sep 14, 2004


Revalis Enai posted:

I'm trying to find and remove duplicates in an array. The array would have an item name and item code like:
code:
[[item1,9090],[item2,9090],[item3,9010]]
I found some code that goes through the array and pushes out unique values, but I don't fully understand how it works.

code:
function removeDuplicate(arr) {
    var i,
        len = arr.length,
        result = [],
        obj = {};
    for (i = 0; i < len; i++) {
        obj[arr[i][1]] = 0;
    }
    for (i in obj) {
        result.push(i);
    }
    return result;
}
I changed the
code:
obj[arr[i]]=0;
to
code:
obj[arr[i][1]]=0;
so it will filter based on unique item code for my array above.
The code itself seems to do the job of filtering, but how would I modify it so it also pushes the item name? So ideally the result would be:
code:
[[item1,9090],[item3,9010]]

The way the original code de-duplicates is that it puts the item codes into an associative array as keys, and since you can't have duplicate keys in an associative array - voila, the codes are de-duplicated.

However, it's not great for your case, because the result only contains the key names (the codes), not the full items.

A better way of de-duplicating is to use Array.filter... e.g.

code:
function unique(a) {
    var seen = {};
    return a.filter(function(item) {
        var k = item[1];
        return seen.hasOwnProperty(k) ? false : (seen[k] = true);
    })
}
Array.filter runs a test on every element of an array - if the test returns true, the element is included in the result. For your array this returns [[item1,9090],[item3,9010]] (the first item for each code).


Bruegels Fuckbooks
Sep 14, 2004


Revalis Enai posted:

Wow, the code got exactly what I was looking for and it ran fast. I'm trying to figure out how it actually does the filtering. From what I understand it's going through every item code (item[1]) to see if the object has its own property? If true, then it returns false, and if false, it does (seen[k] = true), which I'm not sure what that does.

So the deal is array.filter will return the items in the array that pass the test, and exclude the ones that fail.

e.g.
code:
[1,2,3,4].filter( function(item) { return item > 2 } );

This returns [3,4]
The filter is applied from the beginning to the end.

So what we're really trying to do is
"exclude all elements in the array that have already been seen."

"Seen" is an object containing keys that already been used. It starts empty, but as items with keys are found, the keys will be added to the Seen object, and anything with a key that has already been added to the seen object so it won't pass the filter.

code:
        var k = item[1];
        return seen.hasOwnProperty(k) ? false : (seen[k] = true);
This means:
1. Get the key of the object (if you want to do whole object comparison, you could just JSON.stringify the object here).
2. If the object has not been 'seen', then it passes the test - but we add its key to the 'seen' object.
3. If the object has been seen, exclude it from the returned array.

Bruegels Fuckbooks
Sep 14, 2004


roomforthetuna posted:

I suppose that works okay if you don't mind writing wrappers around things, eg, if foo in my example had been canvasContext.bezierCurveTo. The deeper problem really is that there's no way to define a type as "an array of exactly 2 elements" or, even better, "an array that has an even number of elements that is at least 2".

You can specify an array of fixed length as a tuple in typescript.
e.g.
code:
let x: [number, number];

x = [1, 2, 3]; // <-- fails to compile because the tuple type expects exactly two numbers
For the second problem, you can't express a modulo constraint on length as a type in typescript, but you could instead represent it as an array of pairs:

e.g.
code:
let x: [number, number];
let y: Array<[number,number]>;

y = [[1, 2], [3, 4]];

Bruegels Fuckbooks
Sep 14, 2004


nickmeister posted:

Totally a newbie here, self-learning until I can start a course in the fall:

So I started using brackets yesterday to try and practice what I've learned so far. But something very basic doesn't seem to work:

document.getElementById... etc.

It seems to think "document." is wrong? "ERROR: 'document' is undefined."

But when I type the same thing up in notepad it works fine... what's going on?

'document' only exists in a browser context - like, if you press F12 on a website in your browser and type 'document.getElementById(x)' into the console, the command will work. however, if you're running this from something like a node.js command line, that object will not exist. so how are you running the js?

Bruegels Fuckbooks
Sep 14, 2004


Chenghiz posted:

Put a link to site A on site B and check its style using getComputedStyle to see if it's been visited or not.

i thought you were joking but somebody actually made a library that does this. https://readwrite.com/2008/05/28/socialhistoryjs_see_which_site/

Bruegels Fuckbooks
Sep 14, 2004


FSMC posted:

I think I started the loop discussion and my code went full circle.
code:
//from
for(var i=0;i<a.length... 
//to
forEach... 
//to 
for(element in a... 
//then back to
for(var i=0;i<a.length... 
Now a follow-on. I need some debugging advice. I've got some js which works perfectly in chrome, opera, and ie11. But it also needs to run in ie11 with some corporate ie8 emulator mode. With the ie11 dev tools open it works fine, but it stops working when I close the dev tools. How do I work out what is going on, since as soon as I try to look at the source, an element, the console, etc., it starts to work? I've removed all traces of console.log but I'm not sure if there is anything else that makes debug mode work differently than normal.

First, be extra sure you got rid of all the console.logs - in older IE document modes the console object only exists while the dev tools are open, so accessing it otherwise throws an exception and the rest of the code gets skipped. (You do realize the IE11 debugger has tracepoints, right? You can log to the console by right-clicking in the debugger and adding a tracepoint.)

Second, did you try setting the debugger's exception catching mode to 'break on all exceptions?'

Third, you can open the debugger using the 'debugger' statement in js, so one possibility is putting that statement in the place that doesn't get called when the debugger is closed, and moving it earlier and earlier in the program until the debugger starts opening - then concluding that such-and-such line is the culprit. There are some exceptions that are handled differently when the debugger is open versus closed. An example that bit me back in the IE10 days was 'use strict': my js had 'use strict', and if there was a strict mode violation, IE10 would not throw an exception or open the debugger - it would just do gently caress all - but the code would work if the debugger was open! (Hmm, sounds like your problem, come to think of it.)
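On the first point, a defensive sketch for stray console calls when the console object might not exist (it doesn't fix the underlying problem, just makes logging harmless):
code:
// Older IE document modes only define window.console while the dev tools
// are open, so stub it out before any logging code runs.
if (typeof window.console === "undefined") {
    window.console = {
        log: function () {},
        warn: function () {},
        error: function () {}
    };
}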

Bruegels Fuckbooks
Sep 14, 2004


Dumb Lowtax posted:

After seeing those performance tests on the previous page I'm going to try it anyway in my inner loops and run the profiler just to be sure whatever else I typed was being inlined to that same thing basically by the compiler

At least one problem with flying by performance numbers is the McNamara fallacy, e.g.

quote:

The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.

There's a tendency, when quantitative facts (e.g. for loops written with 'i' are x amount faster) are juxtaposed with qualitative ones (forEach and for...in loops are easier to maintain), to let the more measurable thing win. It can feel safer to rely on quantitative facts than on qualitative ones (which can often be reduced to the level of anecdote). So let me throw down some anecdotes:

a) The choice of for loop is trivial in almost all cases, given typical data sizes and iteration counts.
b) In places where the type of for loop actually makes a difference, you probably have a larger design issue at work that would be better addressed than optimizing your for loop.
c) Scale matters - sure you might save 15ms by changing the loop type, but if the inner guts of the loop are taking 1 second in that time, who gives a gently caress about the iterator.

Now, as a programmer, there's a natural inclination to believe the micro-benchmarks, but do be aware that that kind of thinking is what kept the Vietnam War going for seven years.

Bruegels Fuckbooks
Sep 14, 2004


huhu posted:

Add a sleep function?

(from Google)
code:
function sleep(milliseconds) {
  var start = new Date().getTime();
  for (var i = 0; i < 1e7; i++) {
    if ((new Date().getTime() - start) > milliseconds){
      break;
    }
  }
}

just to harp on this: javascript is single-threaded (excepting web workers), so this will block the UI thread and any timers you have running with setTimeout etc. You usually don't want to sleep like this.

the setTimeout equivalent does not block the main thread and just executes a function after x amount of time - however, x is just the earliest the function can fire, it's not guaranteed to fire at exactly x, and setTimeout is also subject to the timer resolution of your operating system (e.g. the timer might only be accurate to ~15ms instead of 1ms).
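The non-blocking equivalent usually looks something like this (sketch):
code:
function sleep(milliseconds) {
    return new Promise(function (resolve) {
        setTimeout(resolve, milliseconds);
    });
}

// inside an async function - the UI thread keeps running while we wait
async function demo() {
    console.log("before");
    await sleep(1000);
    console.log("after (roughly a second later, subject to timer resolution)");
}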

also, new Date().getTime() is a bad way of getting the current milliseconds because at the very best it's only accurate to 1ms. performance.now() can in principle be accurate to the microsecond, except it isn't in practice, because browser vendors realized that returning the real time from performance.now() made machines vulnerable to fingerprinting - so the brilliant solution is that browsers now add random noise to the timestamp returned by performance.now() (https://developer.mozilla.org/en-US/docs/Web/API/Performance/now), to the point where I think performance.now() is less accurate than new Date().getTime() on Firefox.

in conclusion the internet was a mistake.

Bruegels Fuckbooks
Sep 14, 2004


FSMC posted:

There are two use cases for why I want good html coverage and why the data is in the html in the first place. The first is sets of accounts in ixbrl format stored as xhtml, so I need to test anything I build against a variety of accounts created using various software. The second is that I'm working on an extension that does its magic by getting and adding data to different sites I have no control over; if I fix something on one site I want tests to make sure I haven't broken it on another site, etc.
JSDOM seems a bit limited so I'm trying to get Karma to work.

in general the point of unit tests is not to verify end-to-end functionality; the point of unit tests is to verify your code in isolation. the test you're talking about is more of an end-to-end test or an integration test.

the problems with going end-to-end from this perspective are
a) End-to-end testing is slower than unit testing.
b) End-to-end testing is less reliable, as you're at the whims of the external site.
c) In your case, since you're testing stuff that's not under your control, the tests might break for reasons that aren't your fault or problem.

while you can test live dom manipulation using jasmine/karma, you're probably better off not using unit test tools for this kind of test, and using browser automation tools like selenium.

Bruegels Fuckbooks
Sep 14, 2004


roomforthetuna posted:

I guess this was something like
code:
setTimeout(someObj.someFunc, 1000);
Which tends to mess with the "this", because you're passing the function pointer, which doesn't come with embedded 'this'.
You can instead do this kind of thing like
code:
setTimeout(() => someObj.someFunc(), 1000);
This way setTimeout's function pointer is to the lambda that calls someObj.someFunc - when you call a member function via its owner that implicitly includes giving it the 'this' parameter, but if you use it as a variable it's just a function pointer and doesn't have any bound parameters.

i would generally prefer
code:
someObj.someFunc.bind(someObj)
over the arrow function in this case - it's a little terser and makes it more obvious that the 'this' binding matters

Bruegels Fuckbooks
Sep 14, 2004


Joda posted:

That's true. It adds another dimension to a keyword that already has too many meanings. Tbh I never actually used it until ES6 and TS when it started having the same function as in any other OO language.

the arrow function keeping the enclosing 'this' is a super-useful feature, but the issue is that the arrow does so much, and the symbol is relatively difficult to google. If Jimmy hasn't looked up what it does, it's entirely possible he'll miss the 'this' implication of using an arrow function, and that'll take him down a rabbit hole for a few hours.

Bruegels Fuckbooks
Sep 14, 2004


akadajet posted:

The guys I pass on when interviewing? Yes.

dude, if you've got a back-end developer who doesn't give a gently caress about front end / an indian contractor / a new kid out of school, having them do a minor update to a UI is probably the least damaging thing they could possibly be working on.

Bruegels Fuckbooks
Sep 14, 2004


Joda posted:

Don't you lose significant performance potentially? Also I assume there is no way to make wasm compatible with an ES5 platform

i haven't run into a case where compiling back to es5 hurt performance short of "oh, there are more characters so it'll increase the load time of the page."

emscripten can compile to either asm.js or webassembly, so you can provide both a webassembly version and an asm.js version. asm.js can run in IE11 or Safari. the load times are a lot better for webassembly, and IE11/Safari didn't run the asm.js stuff nearly as well as chrome did, but it might be workable for simple use cases.

Bruegels Fuckbooks
Sep 14, 2004


Joda posted:

I assume the entire point of using wasm proper (near native performance and threading) is lost if you compile to asm.js?

E: although I guess it still counts as technical compatibility

it depends on your use case. even the asm.js demos can be pretty impressive from a performance perspective. the big issues with asm.js are:

a) There isn't a great way of telling how much memory you're using with asm.js
b) Initial load times

If you want to compare webassembly vs. asm.js performance:
https://blogs.unity3d.com/2018/09/17/webassembly-load-times-and-performance/

Webassembly has much better load times. Computational performance improves too, but asm.js was already pretty good in that regard.

Bruegels Fuckbooks
Sep 14, 2004


Suspicious Dish posted:

I write TypeScript because I find it worth the hassle, but 99% of these stupid bundler or compiler things are the most fragile pieces of software I've ever used, and manage not only to screw over your computer, but often browser devtools as well.

I'm currently using Parcel. It's the least bad of these stupid bundler things that I've used and it's still basically 100% busted.

I don't understand why anybody would willingly use Babel, for instance.

Babel I've used because I needed to get angular to work in IE11; I imagine there are other ways. On my most recent project I got to the point where source maps were working, so I could debug in typescript while still serving bundled files. If you have a project where you bundle/minify and you're not getting working source maps for whatever reason, either ditch the bundler (if it doesn't support source maps at all) or fix the configuration.

I've seen people cling to bundlers that don't emit source maps (like the one built into asp.net mvc, although apparently there are projects that add support if you want to trust those?), and you can get 95% of the performance improvement just by enabling gzip compression for static files on the web server without having it gently caress up your debugging.

Bruegels Fuckbooks
Sep 14, 2004


Suspicious Dish posted:

no, I have source maps enabled. but source maps are extremely fragile, and they don't seem to work for breakpoint debugging (local variable names are still mangled, sometimes chrome just shows me the unmapped .js file anyway). error stacks as well seem to be completely untranslated through source maps. and how do i import modules through the JS console REPL?

it's weird because we solved debuginfo years ago with DWARF but somehow the web guys decided to throw all of it out the reinvent it with some quirky JSON garbage, and they can't even do that right

i've definitely gotten breakpoint debugging to work with source maps before, it's not an intrinsically impossible thing.

Bruegels Fuckbooks
Sep 14, 2004


porksmash posted:

Typescript question - if I have a type defined as a list of specific strings, is there a way to create an interface that has optional keys for each possible value in that type? Example

code:
type possibleThings = 'all' | 'some' | 'none';

interface shapeIWant {
  all?: any;
  some?: any;
  none?: any;
}
but I don't want to manually keep these two in sync.

how about:
code:
type possibleThings = 'all' | 'some' | 'none';

interface shapeIWant {
  all?: possibleThings;
  some?: possibleThings;
  none?: possibleThings;
}
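That still keeps the key list by hand, though. To actually derive the keys from the union so the two can't drift apart, a mapped type is probably closer to what's being asked for (sketch):
code:
type possibleThings = 'all' | 'some' | 'none';

// Every member of the union becomes an optional key.
type shapeIWant = { [K in possibleThings]?: any };

// Equivalent using the built-in helpers:
type shapeIWantToo = Partial<Record<possibleThings, any>>;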

Bruegels Fuckbooks
Sep 14, 2004


quote:

For some reason, this simply doesn't work. If you replace the "success" part with something like alert("success"), this works just fine. I, for the life of me, have no idea why Javascript doesn't like the version with the getElementById. Anyone know why?

Debugger says: Invalid left-hand side in assignment.

I think it's doing:
code:
(document.getElementsByClassName("red").length || document.getElementById("success").style.display) = "block"
if you don't provide the parentheses.
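Assuming the intent was "show #success once no .red elements remain" (the original snippet isn't quoted here), parentheses or an explicit if avoid the invalid assignment target:
code:
// parenthesize so the assignment is the whole right-hand branch
document.getElementsByClassName("red").length || (document.getElementById("success").style.display = "block");

// or, more readably:
if (document.getElementsByClassName("red").length === 0) {
    document.getElementById("success").style.display = "block";
}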

Bruegels Fuckbooks
Sep 14, 2004


smackfu posted:

Interesting, I figured they were using stringify to validate the user-provided JSON but couldn’t figure out why they were then parsing and using the result.

i've definitely made a utility method "copy" with that JSON.stringify/JSON.parse combination in it. javascript is always pass-by-value, but if a variable refers to an object, the "value" is a reference to the object - this means that if you do something like slice an array and then modify a property of an element in the sliced array, the element in the original array will also be modified (slice only makes a shallow copy), and you can stop that from happening by doing this.
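The utility being described is roughly this (a sketch, with the usual caveats about what JSON round-tripping loses):
code:
// Deep-copies plain data. Functions and undefined values are dropped,
// Dates come back as strings, and circular references throw.
function copy(value) {
    return JSON.parse(JSON.stringify(value));
}

var a = [{ n: 1 }];
var b = copy(a);
b[0].n = 2;
console.log(a[0].n); // still 1 - a plain a.slice() would have shared the object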

Bruegels Fuckbooks
Sep 14, 2004


Nolgthorn posted:

code:
// Code to be used if you're worried that the developers of NPM might start using your code

function clone (input, objects = []) {
    if (input instanceof Object) {
        if (objects.includes(input)) return '[circular reference]';
        objects.push(input);
    } else {
        return input;
    }

    if (Array.isArray(input)) return input.map(item => clone(item, objects));

    const output = {};

    for (const key of Object.keys(input)) {
        output[key] = clone(input[key], objects);
    }

    return output;
}
code:
// Or if you prefer closures

function clone (raw) {
    const objects = [];

    function performClone (input) {
        if (input instanceof Object) {
            if (objects.includes(input)) return '[circular reference]';
            objects.push(input);
        } else {
            return input;
        }

        if (Array.isArray(input)) return input.map(performClone);

        const output = {};

        for (const key of Object.keys(input)) {
            output[key] = performClone(input[key]);
        }

        return output;
    }

    return performClone(raw);
}
None of this code is tested I just typed it.

so why are you doing this rather than JSON.stringify/JSON.parse again?

Bruegels Fuckbooks
Sep 14, 2004


Obfuscation posted:

I've been using [...Array(x)] as a hacky way to initialize empty arrays but this doesn't work in typescript. Is there a better way?

edit nm

Bruegels Fuckbooks
Sep 14, 2004


Dumb Lowtax posted:

Now the Chrome Developer Tools window has logpoints so you don't need Console.log at all.... as infrequently as you should have already been using it due to breakpoints and hovering over variable names for contents normally being the more appropriate way to monitor a running program

man, it seemed dumb to set a conditional breakpoint and put console.log() in the condition like i've been doing to get the equivalent effect, but logpoints are apparently a new feature in Chrome 73, so I guess I can look forward to that soon.

Bruegels Fuckbooks
Sep 14, 2004


Suspicious Dish posted:

breakpoints don't work correctly half the time for me. thanks source maps

before typescript and this new node/webpack ecosystem, i didn't see many projects that were more than, say, 200k ~ 300k of javascript - people would be like "gently caress this, all this code shouldn't be on the front end, god have mercy, let's not write any more javascript."

nowadays i'll see ts projects run half a million ~ million+ loc not including jquery, react / whatever, and these take minutes to build. on the one hand it's cool that these tools make it practical to produce a working project with this much javascript, on the other hand, it's a crime against nature. if i had to build every time i did console.log i'd go insane.

Bruegels Fuckbooks
Sep 14, 2004


duz posted:

It breaks IE. What more of a benefit do you need?

For a real answer, it makes templating easier.

typescript will let you use template literals with IE if you're so inclined - when targeting ES5 it compiles them down to plain string concatenation.

Bruegels Fuckbooks
Sep 14, 2004

code:
let arr = [];
let min = Math.min.apply(null, arr.filter((value) => value > 0));
What's the value of min?

Infinity


The reason?

Math.min() with no arguments returns Infinity
(whereas Math.min(undefined) returns NaN),
so Math.min.apply(null, []) becomes Math.min(), which equals Infinity.
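A sketch of a guard for the empty case, if you want a saner result than Infinity:
code:
let arr = [];
let positives = arr.filter((value) => value > 0);
let min = positives.length ? Math.min.apply(null, positives) : undefined; // undefined for an empty input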


Bruegels Fuckbooks
Sep 14, 2004


Dumb Lowtax posted:

That too lol

I've had the same realization about parts of engines like JQuery so many times that I don't feel like I'm missing much at all by being wholly adverse to importing outside code

Yeah, the big problem when evaluating libraries is knowing what to reuse and what you should write yourself. There's always a cost when using a third party library - it can be difficult to assess how well you understand the problem the library is trying to solve, dependency management can be a huge bitch, and it can be difficult to assess how well tested the library is or whether it's handling cases that are pertinent to your problem domain.

I would say new developers, or developers fresh out of school, will generally use every library under the sun and then get burned when a problem comes up that's hard to explain because it has nothing to do with the problem domain and is just a consequence of pulling in all those libraries. This jades people and turns them into "gently caress it, I'll write it myself" types - and those types get burned when they try to roll their own crypto, serialization/deserialization, authentication, XSS protection, input sanitization, etc. Judgement eventually gets better through experience.
