Jabor
Jul 16, 2010

#1 Loser at SpaceChem

caiman posted:

Over the past few months I've begun using some new tools (new to me anyway) that are changing the way I think about my development process. SASS, for instance, was the first tool that made me begin to think about the concept of separating "development" directories from "production" directories. Now I'm looking into Grunt, which does even more to force a separation between development files and production files. These tools are opening my eyes to best practices too; before, I never did any minification or concatenation of my js, now I'm trying to get into that habit.

I'm struggling to think of how to phrase my question. What would a good development project structure look like, and what are the best practices for deploying it to production, I guess? Or are there even any guidelines? Maybe it's just a whatever works best for me thing?

EDIT: clarity.

Just a few things off the top of my head:

- Everything in version control. Not having your source code under version control is completely inexcusable, but you'd be surprised how many places don't do it.
- I mean everything under version control. This includes your dependencies, and any tools you use to compile your production site from your source code. If something breaks and you need to roll back to last week's build of your site, you should be rolling back to exactly the same setup you had when you built it last week. If you can't put your dependencies straight into version control for some reason (I haven't heard a good reason not to do this, but some people still think it's a bad idea somehow :shrug:), you can use a package manager to configure your dependencies instead and just have the configuration checked in.
- A tool you can use to deploy your site to a particular server in the same way every time. If your deployment process involves manually copying files, or multiple steps that must be performed in order, or really anything beyond "run script -> tell it what version to deploy and what server to deploy to", you should be automating more of it. (There's a sketch of what that can look like below.)

There are other things you probably should be doing, but I consider those three to be basically the minimum in terms of having a functional development environment.
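
To make the deployment point concrete, here's a minimal sketch of a "run script, tell it the version and the server" deploy. Everything in it - the builds/ layout, the deploy SSH user, the /srv/site paths - is invented for illustration, and it assumes rsync and ssh are available:

code:

#!/usr/bin/env node
// deploy.js -- deploy one specific built version to one specific server.
// Usage: node deploy.js <version> <server>
const { execFileSync } = require("child_process");

const [version, server] = process.argv.slice(2);
if (!version || !server) {
  console.error("usage: node deploy.js <version> <server>");
  process.exit(1);
}

// Copy the exact build artifact for that version...
execFileSync("rsync", [
  "-a",
  `builds/${version}/`,
  `deploy@${server}:/srv/site/releases/${version}/`,
], { stdio: "inherit" });

// ...then switch a symlink, so rolling back is just re-running the
// script with last week's version number.
execFileSync("ssh", [
  `deploy@${server}`,
  `ln -sfn /srv/site/releases/${version} /srv/site/current`,
], { stdio: "inherit" });

console.log(`deployed ${version} to ${server}`);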

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Have you tried logging the failing requests to see if there's any pattern in the user agents or anything else?

Are you sure they're not actually OPTIONS requests being used to preflight things?
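
If it helps, a logging sketch (this assumes an Express app; the fields are just the ones I'd start with):

code:

const express = require("express");
const app = express();

// Log enough about every request to spot a pattern in the failing ones.
app.use((req, res, next) => {
  console.log([
    new Date().toISOString(),
    req.method, // OPTIONS showing up here means you're looking at preflights
    req.originalUrl,
    req.get("User-Agent") || "-",
    req.get("Origin") || "-",
  ].join(" | "));
  next();
});

app.listen(3000);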

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

hayden. posted:

Font Awesome seems to mess up sometimes on my site:



but it's weird because it acts like it completely does not have the font with a few exceptions (the bug, cog, shield, etc) and if I just move my cursor over the error square, the font pops back normal. Any ideas?

Does your :hover styling change the font characteristics at all? It's possible that the font doesn't have a particular character with the default font styling for your site, but does have it for the hover styling.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Shinku ABOOKEN posted:

This is the problem. Only because Firefox lies and 100% zoom is larger than 100%.
I guess I'll remove the vertical borders altogether.

I don't think that's the problem. The issue here is that your table sizes aren't an exact number of pixels - there's no good way to render a 1px border that's in the middle of two pixels, you just get a choice of a number of bad ways. Firefox chooses to have the border occupy both pixels in that case.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

revmoo posted:

Anybody have Firefox poo poo the bed on SSL? I have my FF set to clear all cookies, cache, everything on shutdown, and yet if I load up a fresh instance of FF and browse to a number of sites I get an untrusted connection error. I've had other people verify with their copies of FF and it only happens with my copy.

Do you have malware running a man-in-the-middle attack on your connection?

Try comparing the certificate that Firefox is complaining about with the real certificate that other people are seeing.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

nexus6 posted:

Has anyone had issues using Google's No Captcha Recaptcha? I added it to one of my forms yesterday and tested it: you can't submit the form without completing it, you can't submit the form if you fill it in wrong and you can't submit the form if you have Javascript off. However I've had over 2,000 spam submissions in the past 24 hrs.

Is it broken or have I missed a step? I've followed all the instructions on their developer page.

You are actually checking the recaptcha token on the server to verify that it's been filled out, right? The client-side checks are just there for convenience, they don't provide any actual security.
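
For reference, the server-side check is a single POST to Google's siteverify endpoint. A minimal sketch (Node 18+ for the global fetch; the secret lives in an environment variable here, and double-check the response fields against Google's docs):

code:

// token is the g-recaptcha-response value submitted with the form.
async function verifyRecaptcha(token, remoteIp) {
  const params = new URLSearchParams({
    secret: process.env.RECAPTCHA_SECRET,
    response: token,
    remoteip: remoteIp,
  });
  const res = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    body: params,
  });
  const body = await res.json();
  return body.success === true; // reject the submission when this is false
}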

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

me your dad posted:

What's really strange is that it's only affecting the two middle elements. And at work, only the second element appears like that.

Probably because those pages are the ones you've already visited on that machine.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
The perfect-world Right Answer for an API is a self-signed CA cert that you pin in the app and use to sign your own certificates for the API.

The point of the CA system is to verify that the owner of a cert legitimately represents the company/domain name/whatever that the cert specifies. But if you're writing an app targeting a particular API, you already know who the right owner is - you should really be asking them if a particular certificate is accurate, not some CA that basically just adds another attack vector.

Of course, this falls apart a bit when you're writing an API for random third-party developers to use, since half of them are likely to just turn off certificate validation entirely if it doesn't Just Work. It's probably a good idea to get your intermediate CA cert signed by someone just to make it a bit more idiot-proof.
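
In Node, pinning your own CA is just a matter of handing it to the TLS layer. A sketch - the file name and host are placeholders, and note that setting ca replaces the default trust store rather than adding to it:

code:

const https = require("https");
const fs = require("fs");

// Only certificates chaining to pinned-ca.pem are accepted, regardless
// of what the system trust store thinks.
const req = https.request({
  host: "api.example.com",
  path: "/v1/things",
  ca: fs.readFileSync("pinned-ca.pem"),
}, (res) => {
  console.log("status:", res.statusCode);
});
req.on("error", (err) => console.error("TLS/request error:", err.message));
req.end();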

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Honestly I don't think your plan of "store keying material on developers' workstations, but not under any form of version control" is a particularly good one, from both a security and a business continuity standpoint.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
You're basically hosed if your office has a fire, major flood, or even just hardware failure that wrecks your developer machine to the point where you can't pull your key material from it. In addition to that, you have no ability to audit accesses to that material, so an attacker that gains access (or even just a rogue employee) can easily exfiltrate it without raising any red flags.

A better solution is to use hardware-backed security so that you don't have raw keying material anywhere that a human could possibly read it - for business continuity and standing up new servers you have copies in (geographically distributed) physical safes with strictly controlled and logged physical access.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Worked for me, once I turned off some browser plugins that were blocking it.

Have you tried it in a clean browser profile?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
It's 2015, you can absolutely give each page a unique URL and update the location the browser displays without actually reloading the page.
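
A sketch of the pattern with the History API (renderContent is a stand-in for whatever actually draws the page):

code:

// Give each page a unique URL without a reload.
function renderContent(path) {
  console.log("rendering", path); // placeholder for your real rendering logic
}

function showPage(path) {
  renderContent(path);
  history.pushState({ path }, "", path); // address bar updates, no reload
}

// Keep back/forward working too.
window.addEventListener("popstate", (event) => {
  if (event.state) renderContent(event.state.path);
});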

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
The whole point of GWT is using Java instead of JS to do your webdev. http://www.gwtproject.org/overview.html

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

ModeSix posted:

It's by the same guy that made TSC, he's just deprecated TSC in favour of this new thing.

So, how long is he going to support this one before deprecating it in favour of something else?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
All of those "clever tricks" you try to fool spambots with are basically just an easy way to make your site completely unusable by anyone that relies on a screen reader or other assistive technology in order to use their computer. Please don't do that.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

kedo posted:

I've been working on a Macbook Pro for pretty much my entire career and am in need of a new computer, but I'm not willing to shell out several thousand dollars for a computer that maxes out at 16 GB of RAM. I use that much now, even when I'm careful about quitting programs I'm not using. I could go the desktop route but I work half the time in an office, half the time from home. I don't want to have to switch machines every other day.

Leave a desktop in the office, remote into it when working from home. You're literally in the same city so latency shouldn't be an issue.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Heskie posted:

Semantically it is fine, but I think it still matters in terms of SEO which is a shame.

What gives you this impression?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

melon cat posted:

I see what you're saying. I've checked the YouTube iFrame API guide (I've read it from start to finish about 4 times, now) but it doesn't include any parameters relating to my modal window issue. And all solutions I've found online apply to people having modal window issues with their Bootstrap sites (mine isn't a Bootstrap site). I hate to throw in the towel, but I'll probably just end up hyperlinking an image to the actual video source. Which is a drat shame, because it's going to pull visitors away from my site each time they click a link.

Really? You can't see anything on that page that would allow you to pause the currently playing video?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

darthbob88 posted:

Accessibility: Is there a way to announce when visibility changes on a page? aria-live works when the actual content changes, but it's less helpful when I just toggle visibility on the same set of elements, without actually changing anything.

Can you stick the live region on whatever contains the stuff you're toggling?
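
i.e. something along these lines - the id is made up, and it's worth testing, since how eagerly a given screen reader announces visibility changes inside a live region varies:

code:

// Put the live region on the container that stays in the DOM, so toggling
// visibility of its children still registers as a content change.
const container = document.getElementById("results"); // hypothetical id
container.setAttribute("aria-live", "polite");
container.setAttribute("aria-atomic", "true");

function toggle(el) {
  el.hidden = !el.hidden; // revealing an element now updates the live region
}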

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Having the same name as a totally different product in the same space seems like an awful idea for a whole lot of reasons.

Like, what do you think's going to happen the first time someone else at your company does a search on the name of this tool they've been told to use, and they find a whole lot of public documentation describing a tool that solves the problem they have?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

CarForumPoster posted:

Why have another domain to store images/content? Like SiteX.com loads its images, videos, etc. from SiteXcdn.com or something like that.

If you're running a complicated website, your application servers are kind of expensive, with lots of CPU and memory and stuff. Static content like images and videos doesn't need any of that - it's just pulling bytes off a disk and shoving them down the pipe. So it's often a good idea to farm that work off to a bunch of cheaper servers that exist just to service those requests for static content.

You could do this in your load balancer, where you direct requests for static content to a separate set of servers. But then your load balancer still has to handle all that traffic, which can be pretty substantial. By serving your static content from a different domain, you can use DNS to have users connect directly to your static content servers to get that static content, taking that traffic away from your application server stack entirely. You can even farm it out to a CDN that focuses entirely on serving that sort of content at minimal cost, has replicas close to where the user is to minimize latency, etc.
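
In the application itself this can be as small as one helper that every template uses (hostname invented, obviously):

code:

// All static asset references point at the CDN/static hostname, so those
// requests never touch the application servers or the load balancer.
const STATIC_HOST = "https://static.example-cdn.com";

function assetUrl(path) {
  return STATIC_HOST + path;
}

// In a template: <img src="${assetUrl('/img/logo.png')}">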

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
If you just assume that all users should receive the privacy benefits of the law, you don't have this problem.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
People that are still using IE6 are probably the same people that aren't ever going to be comfortable buying something online, and would rather just call you over the telephone and sort it out that way. There's no benefit, especially when you realise that spending engineering time on it means spending less time on features for the people who actually use your site.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Lumpy posted:

Got a link for the research on that? What if your target market is less developed countries where a decent percentage of people use IE6? Should you tell them to gently caress off? Is making sure those people can't use your site a good business move?

Even in less developed countries, IE6 usage is incredibly low. If you're in that market, the long tail you should be targeting is older versions of mobile webkit.

If you're working on IE6 support instead of making sure your site is usable on a tiny phone screen, you're making a huge mistake.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Nolgthorn posted:

I really never got around to making letsencrypt work either. As the certs expire seemingly daily and therefore you have to automate the certification process so that it can run all the drat time.

This is by design. You put in the effort once and then it just works forever.

The alternative is that someone does it manually, has forgotten about it by the time the cert expires, and then everything breaks and they decide to just go back to insecure http.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
If it's an API that you're supposed to use from your backend instead of directly calling from the user's browser, then they're not going to make it callable from browsers.

You'll need to structure your app so that your server makes the calls, and exposes its own API to your React app with the results.
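
A minimal sketch of that shape with Express (Node 18+ fetch; the upstream URL, route, and env var name are all placeholders):

code:

const express = require("express");
const app = express();

app.get("/api/things", async (req, res) => {
  // Call the third-party API server-side, where CORS doesn't apply
  // and the key stays secret.
  const upstream = await fetch("https://thirdparty.example.com/v1/things", {
    headers: { Authorization: `Bearer ${process.env.THIRDPARTY_API_KEY}` },
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000); // the React app calls /api/things on our own origin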

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

CarForumPoster posted:

Literally every week I see tiny hotels which are part of a chain like Best Value Inns, owned by some random dude in a small town get sued because the corporation's booking website "isn't ADA compliant." (Which, as we've established, is basically impossible to verify.) The compliance issue at the heart of the lawsuit? The dude would have to pick up the phone to verify that they have ADA accessible rooms. They do have them. They just can't book that specific room through the corporate website. Keep in mind the person they're suing HAS ZERO loving CONTROL OF THE WEBSITE THEY'RE BEING SUED ABOUT. IT IS PHYSICALLY IMPOSSIBLE FOR THEM TO REMEDY IT BECAUSE IT'S MAINTAINED BY THEIR PARENT CORP. THE ATTORNEY SUES EACH LOCATION RATHER THAN THE PARENT CORP.

And then the franchise forwards the suit to the parent corporation that operates the website, and the lawyers for the parent corporation send their form letter replies and work it through to a settlement. What would you prefer happen? That companies get to entirely dodge having to obey the law by saying that their website is operated by someone else?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

CarForumPoster posted:

This will be my last post on the derail. It is impossible to comply with ADA website laws enough to avoid a lawsuit. Here is an ADA case about website reservations against a hotel. The hotel DOES NOT HAVE A WEBSITE.

Something you don't seem to have realised about the American court system: anybody can sue anybody for anything! At all! I could sue you for using bold all-caps in this post!

The fact that someone has filed a suit does not mean that their claims have merit, and it does not mean that it won't be immediately dismissed as soon as the judge gets a chance to rule on a summary judgement motion.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Honestly the biggest problem with IE was the update model.

Nobody cares if your site doesn't support a two-year-old version of Firefox or Chrome - because nobody is using a two-year-old version of those browsers.

Lotsa people using old-rear end versions of IE though.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

My Rhythmic Crotch posted:

Also the current system seems to rely on good old fashioned trust to an alarming extent. Like google will flag your messages as spam if you try to set up your own email server, forcing you to use a service like mailgun. That's especially annoying because who wants to pay just to be able to send email? Seems kinda broken.

If you don't care enough about your email to be willing to pay to send it, odds are way higher than normal that your email is spam.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
If this is just something for personal use you could use the cut-and-paste flow (where, after the user signs in and approves the request, they manually copy the OAuth token into your app).

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
So the Google developer docs lay out your options pretty nicely: https://developers.google.com/identity/protocols/OAuth2InstalledApp

Essentially you have two choices:
- Run a web server on localhost that you can use to catch the OAuth response.
- Install a handler for a custom URI scheme and use that scheme to catch the OAuth response.
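
The first option is less magic than it sounds. A sketch of the throwaway localhost listener (the port and path are arbitrary; whatever you pick has to match the redirect URI you register):

code:

const http = require("http");

const server = http.createServer((req, res) => {
  const url = new URL(req.url, "http://127.0.0.1:8123");
  if (url.pathname === "/callback") {
    const code = url.searchParams.get("code"); // authorization code to exchange
    res.end("You can close this window now.");
    server.close();
    console.log("got authorization code:", code);
  }
});

server.listen(8123); // redirect URI: http://127.0.0.1:8123/callback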

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Use WebAssembly to run a virtual machine with IE9/Windows XP running in kiosk mode.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
YouTube bots that are boosting a particular channel also subscribe to a whole bunch of other channels to try and blend in with legitimate users.

Hence, the approximation is more consistent over time, rather than fluctuating as bots subscribe and are then banned.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

icecreamcheesesteak posted:

If you don't want a jwt to be used after a user logs out, then a white or blacklist would need to be implemented somehow, so a db would need to be hit for each request.

Note that in your proposed example of using sessions + cookies, the cookie is a key used to look up session state in a database, so you're not exactly avoiding this step.

The advantage of using JWTs comes into play mostly when your site is large enough to need more than one frontend server - you don't need to have any shared sign-in state between your different frontends, since any frontend can decode the information from the JWT.
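
Concretely, with the npm jsonwebtoken package (key handling simplified; the file name is a placeholder):

code:

// Any frontend can validate a token locally -- no shared session store.
const jwt = require("jsonwebtoken");
const fs = require("fs");

const publicKey = fs.readFileSync("jwt-public.pem"); // same key on every frontend

function authenticate(token) {
  try {
    return jwt.verify(token, publicKey, { algorithms: ["RS256"] }); // decoded claims
  } catch (err) {
    return null; // expired or tampered token
  }
}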

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

icecreamcheesesteak posted:

I've seen a couple of articles mention that a benefit of using jwts is better performance due to not needing to grab data from a db, but also later mention that it might be a good idea to have a blacklist for tokens. I know I'd still need to look up session data if using cookies, I just thought that if someone wanted to use jwts and have a way to prevent an unexpired token from being used then a session store would be needed, so using jwts wouldn't actually be beneficial in that regard.

This is where the split between access tokens and refresh tokens is useful - if your access token is short-lived, then you don't really need to keep a denylist, instead you only need one for refresh tokens that you check when the refresh token is used. Or perhaps you have a tiny, cheap-to-query system that contains the denylist for access tokens, but only for tokens that haven't yet naturally expired. This can be much faster than a session state database because eventual consistency can be fine here.
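
A sketch of that split, again with jsonwebtoken - it assumes tokens are issued with a jti claim, and the Set is a stand-in for whatever tiny revocation store you'd actually use:

code:

const jwt = require("jsonwebtoken");
const fs = require("fs");

const publicKey = fs.readFileSync("jwt-public.pem");
const privateKey = fs.readFileSync("jwt-private.pem");
const revokedRefreshTokens = new Set(); // stand-in for the real denylist store

function refreshAccessToken(refreshToken) {
  const claims = jwt.verify(refreshToken, publicKey, { algorithms: ["RS256"] });
  if (revokedRefreshTokens.has(claims.jti)) {
    throw new Error("refresh token revoked");
  }
  // The access token is short-lived, so it never needs its own denylist
  // check -- revocation just waits out the expiry.
  return jwt.sign({ sub: claims.sub }, privateKey, {
    algorithm: "RS256",
    expiresIn: "10m",
  });
}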

quote:

I'm unfamiliar with a site having multiple frontend servers. Is a frontend server in this case a server that responds with html pages (as opposed to serving javascript for SPAs), and is the jwt being decoded server side?

In this context "frontend server" is any application server that directly handles user requests - as distinct from a "backend server" that responds to requests from your other servers (and thus only indirectly handles user requests).

In a large web application, you'll frequently have multiple servers for any given thing, both to handle more load than a single computer can support, and to provide continuous availability even if one of your servers goes down. (For example, if some idiot with a post holer decides they don't need to do a locate before digging straight through a fiber line.) Often these servers are widely geographically distributed, but user requests could still hit any one of them depending on the vagaries of load-balancing, so using session state would require having some state somewhere that can be written to by any frontend yet have that same state be synchronously available from any other frontend. Or you do session-aware load balancing, which comes with its own can of worms.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
What was the TTL on the previous records? Could be one of your ISP's resolvers has cached the previous record, so it'll keep flip-flopping until that old record ages out of everyone's caches.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Don't set up your database to be publicly accessible!

Set it up so that the only thing that can talk to it is your web server. Your web server queries the database to determine what it should serve to specific users.
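
The shape of it, with node-postgres (the private address and table are invented; the real point is that the DB's firewall only admits the web server):

code:

const { Pool } = require("pg");

// The web server is the only client the database accepts connections from.
const pool = new Pool({ host: "10.0.0.12", database: "app", user: "web" });

async function pageDataFor(userId) {
  const { rows } = await pool.query(
    "SELECT title, body FROM posts WHERE owner_id = $1", [userId]
  );
  return rows; // the server decides what each user gets to see
}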

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
You could work around this by using ACAO * and manually validating the origin on the backend (if required), right?
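
A sketch of that workaround in Express (the allowed-origin list and route are made up; note a wildcard ACAO also rules out credentialed requests):

code:

const express = require("express");
const app = express();

const ALLOWED = new Set(["https://app.example.com"]); // illustrative list

app.use((req, res, next) => {
  res.set("Access-Control-Allow-Origin", "*"); // incompatible with cookies
  next();
});

app.post("/api/sensitive", (req, res) => {
  // The browser no longer enforces origin, so we do it ourselves.
  if (!ALLOWED.has(req.get("Origin"))) {
    return res.status(403).json({ error: "bad origin" });
  }
  res.json({ ok: true });
});

app.listen(3000);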

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

nexus6 posted:

Having MAMP installed is usually enough in like 95% of cases

Which versions?

If you want to upgrade any of those versions, do you make sure every developer is doing so in sync? And also in sync with what you're running in production? How?
