|
Mogomra posted:The horror is that this third party requires that all the objects I create in their system have a unique name. It's not for security or anything, so it sounds like a job for uniqid(), right? Nope. There's always mcrypt_create_iv (using MCRYPT_DEV_URANDOM) or openssl_random_pseudo_bytes, which should use /dev/urandom and be cryptographically secure (guaranteed different, unpredictable random values). Note that heavy calls to /dev/urandom will expire the cache of available random values, and can result in a pause while more are generated. At least whoever wrote the documentation for uniqid knows how to turn a negative into a positive: PHP Uniqid Docs posted:Can be useful, for instance, if you generate identifiers simultaneously on several hosts that might happen to generate the identifier at the same microsecond.
|
# ? Jul 9, 2014 01:00 |
|
v1nce posted:Note that heavy calls to /dev/urandom will expire the cache of available random values, and can result in a pause while more are generated. This is actually /dev/random and there's essentially no reason to ever use /dev/random for anything for exactly that reason.
|
# ? Jul 9, 2014 01:03 |
|
v1nce posted:Note that heavy calls to /dev/urandom will expire the cache of available random values, and can result in a pause while more are generated. urandom is the one that never blocks. random is the one that arbitrarily blocks when you use it heavily for not-very-good reasons.
|
# ? Jul 9, 2014 01:04 |
|
Westie posted:It's not a GUID generator, that's for sure. It returns a hex representation of server time, which, after compiling it, pauses for a microsecond or some poo poo like that so each call within that box returns a different result. That seems like what happens when you pass true as the second parameter, but I just had it in a loop with no parameters (until I read those docs). It was definitely returning the same thing for multiple iterations. I dunno. I just think the name is a real misnomer.
|
# ? Jul 9, 2014 01:49 |
|
Westie posted:It's not a GUID generator, that's for sure. It returns a hex representation of server time, which, after compiling it, pauses for a microsecond or some poo poo like that so each call within that box returns a different result. Oh no, it's worse than that, and more and more broken the further back in PHP release history you get. uniqid is an active security threat and it should never be exposed to users due to the information it leaks in plain sight. For generating random crap with PHP, also consider ircmaxell's RandomLib, which uses some RFC-defined ways to mix entropy sources together so the weaker ones suck slightly less. ircmaxell's the guy behind the recent hashing and password generation sanity built into newer PHP versions. Keep in mind that mcrypt_create_iv can be absurdly slow, and if you use openssl_random_pseudo_bytes instead, pay attention to the by-ref second parameter... McGlockenshire fucked around with this message at 08:11 on Jul 9, 2014 |
# ? Jul 9, 2014 08:09 |
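For reference, a minimal sketch of the approach McGlockenshire describes, including the by-ref check; the 16-byte length and hex formatting are arbitrary choices here, not anything the third party mandates:

```php
<?php
// Hypothetical unique-name generator built on openssl_random_pseudo_bytes.
// The 16-byte length is an assumption; adjust to taste.
$bytes = openssl_random_pseudo_bytes(16, $cryptoStrong);
if ($bytes === false || !$cryptoStrong) {
    // The by-ref second parameter: OpenSSL can silently fall back to a
    // non-cryptographic source, and only this flag tells you it happened.
    throw new RuntimeException('no cryptographically strong randomness available');
}
$name = bin2hex($bytes); // 32 hex characters, effectively collision-free
echo $name, "\n";
```

Unlike uniqid(), nothing here leaks the server clock.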
|
McGlockenshire posted:Oh no, it's worse than that, and more and more broken the further back in PHP release history you get. uniqid is an active security threat and it should never be exposed to users due to the information it leaks in plain sight. Well, it did pretty much say what I said! Didn't know about php_combined_lcg though, thanks for pointing that out. If PHP docs say something is insecure, then of course, by the grand scale of things it's a potential fuckup worthy of YOSPOS. A fellow goon/colleague and I discovered something similar w/ rand... but then it's quite easy to find. The thing I love about the rand functionality is that it generates up to 32,768 numbers per box. Now, imagine the hilarity that ensues when you're trying to import over 60,000 products and the product ID is generated using rand()...
|
# ? Jul 9, 2014 09:23 |
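To put numbers on that import story: drawing 60,000 IDs from a 32,768-value range guarantees collisions by pigeonhole alone. A quick sketch (rand() is clamped to 0–32767 here to mimic the 32,768-numbers-per-box behaviour described above):

```php
<?php
// Count repeats when minting 60,000 "product IDs" from a 32,768-value range.
$seen = [];
$collisions = 0;
for ($i = 0; $i < 60000; $i++) {
    $id = rand(0, 32767); // mimics rand() where RAND_MAX is 32767
    if (isset($seen[$id])) {
        $collisions++;
    }
    $seen[$id] = true;
}
// By pigeonhole alone there must be at least 60000 - 32768 = 27232 repeats;
// in practice the birthday effect pushes it far higher.
echo "collisions: $collisions\n";
```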
|
Westie posted:The thing I love about the rand functionality is that it generates up to 32,768 numbers per box. ..why? Just why. i mean I'm sure it's just "because PHP" but why is it only positive, signed 16-bit integers? Westie posted:product ID is generated using rand()... ...also why? I mean there's times where sequential ids might be a security fuckup, but it seems like products in a catalog isn't one of them?
|
# ? Jul 9, 2014 09:41 |
|
ultramiraculous posted:...why? Just why. I mean I'm sure it's just "because PHP", but why is it only positive, signed 16-bit integers? It's a thin wrapper around the C rand() function, and rand()'s requirements were specified in 1989: the C standard only obliges RAND_MAX to be at least 32767.
|
# ? Jul 9, 2014 14:20 |
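You can see this from PHP itself; the getrandmax() output is platform-dependent, so treat the comments as illustrative rather than guaranteed:

```php
<?php
// rand() inherits the C library's RAND_MAX, so its upper bound is an
// accident of the platform rather than a PHP design decision.
var_dump(getrandmax());    // 32767 on Windows, 2147483647 with glibc
var_dump(mt_getrandmax()); // mt_rand() tops out at 2147483647 everywhere
```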
|
gh... disregard me.. misread
|
# ? Jul 9, 2014 16:32 |
|
I'm sorry but the pedidos/peidos order/fart Portuguese typo needs more attention. Just imagine that one going through: Welcome to our Portuguese ecommerce portal. Thank you for your fart. Here is your fart summary... Your fart has shipped! Problem with your fart? Do you need to change your fart?
|
# ? Jul 9, 2014 19:07 |
|
Fartmeter. Fartphile.
|
# ? Jul 9, 2014 19:17 |
|
Which one is more of a horror?
The Boss posted:when you are on production, while we try very hard not to make changes there, sometimes there is a need, so it is best to use github for windows and do a sync so it commits changes Because I stupidly assumed he wasn't developing in production without telling any of the other devs.
|
# ? Jul 9, 2014 21:59 |
|
Mogomra posted:Which one is more of a horror? The horror is that you are able to edit prod directly instead of pushing deployments.
|
# ? Jul 9, 2014 22:22 |
|
Always do a 'git status' unless you're absolutely certain of what it'll be.
|
# ? Jul 9, 2014 22:30 |
|
ultramiraculous posted:...also why? I mean there are times where sequential IDs might be a security fuckup, but it seems like products in a catalog isn't one of them? This is one of those design decisions that were in the system way before I joined, and I'm glad to say that there is some progress being made to make the IDs sequential or something like that.
|
# ? Jul 9, 2014 22:46 |
|
Volmarias posted:The horror is that you are able to edit prod directly instead of pushing deployments. Seriously. There's almost never a reason for a developer to be poking around in a live production server, and there's absolutely never a legitimate reason for a change to be on a production box but not in source control.
|
# ? Jul 9, 2014 22:46 |
|
Spatial posted:It's for an embedded radio transceiver that runs off a little battery. Normally there's a host CPU which controls what it does, but having user code running on the transceiver means you can power the big ol' CPU down and allow the transceiver to run autonomously, giving you longer battery life. use forth
|
# ? Jul 9, 2014 23:04 |
|
You might be dealing with a high-latency/low-bandwidth link where you need to tweak a configuration value to get you or your client back in the market *right now*. Cutting a new version and deploying it out takes time. That's an appropriate reason. The real horror is not then immediately merging the change back into source control so that your changes are re-releasable. And it's perfectly reasonable for developers to have access to and indeed troubleshoot and fix problems in production when you're geared towards minimising downtime.
|
# ? Jul 9, 2014 23:04 |
|
Malcolm XML posted:use forth I was trying to resist the urge to make that suggestion.
|
# ? Jul 9, 2014 23:21 |
|
Milotic posted:And it's perfectly reasonable for developers to have access to and indeed troubleshoot and fix problems in production when you're geared towards minimising downtime. Maybe at the start, but once you grow up as a company it's a terrible idea. Provide tools that allow you to see what's happening. Things like Logstash, New Relic, and Librato can provide a ton of insight with no direct access required. And once you're dealing with a cluster of servers, not just one box, pushing out a new deploy is certainly better than changing poo poo on one box at a time.
|
# ? Jul 10, 2014 00:30 |
|
If pushing out a new version through your normal deployment process takes so long that you have to skip it sometimes, then you should fix your deployment process.
|
# ? Jul 10, 2014 00:33 |
|
Plorkyeran posted:If pushing out a new version through your normal deployment process takes so long that you have to skip it sometimes, then you should fix your deployment process. OK, I'll get right on that. It takes more than an hour to push a deploy at work, which we do twice a day. (Used to take a lot longer, before we replaced the C++ code generation with a JIT.) If we're seeing a failure only on certain hosts, it can make a lot of sense to connect directly to one of them, observe, change a configuration parameter, selectively kill processes, attach a debugger, maybe selectively apply an instrumentation patch or test a fix in place. If you have to do a full deploy you risk losing the state that will help you find the actual transient problem. (And you don't risk mistakenly deploying your scratch change to the whole fleet.) When you're done the machine gets reprovisioned back into the normal fleet configuration of course, and you don't do it in order to change what is "deployed in production", but rather to use production state to analyze a problem. Still, absolutist "you should never have anything on a prod host that wasn't fully deployed" is counterproductive.
|
# ? Jul 10, 2014 01:30 |
|
Volmarias posted:The horror is that you are able to edit prod directly instead of pushing deployments. Oh my god, you obviously don't understand the intricacies of Lean Startups or Minimal Viable Product. - My Boss
|
# ? Jul 10, 2014 01:47 |
|
Mogomra posted:Oh my god, you obviously don't understand the intricacies of Lean Startups or Minimal Viable Product. - My Boss What do you guys do when more than one person has to diagnose a problem that's occurring in production? Take turns deploying your instrumentation and experiments? Try to set a deathtrap that will give you a core dump when the problem occurs, and then New Relic it back to a development server to inspect? Coordinate them all on one branch and hope nobody needs to push a mainline change until you're done? I like "freeze this host out of the deploy set, have developer figure out wtf, add it back to the set when she's done", but I'm sure there are other ways. What's your model?
|
# ? Jul 10, 2014 02:00 |
|
I'm pretty sure I explained what happens pretty well. We both RDP, yes RDP, into THE loving production server. Hilarity ensues and I get yelled at because I'm at the bottom of the ladder.
|
# ? Jul 10, 2014 02:05 |
|
I wish we had a deployment process.
|
# ? Jul 10, 2014 02:11 |
|
I really enjoyed it when an intern looked at me like I was crazy when I told him we don't deploy things to production the same way one does to Heroku.
|
# ? Jul 10, 2014 03:46 |
|
Subjunctive posted:What do you guys do when more than one person has to diagnose a problem that's occurring in production? Take turns deploying your instrumentation and experiments? Try to set a deathtrap that will give you a core dump when the problem occurs, and then New Relic it back to a development server to inspect? Coordinate them all on one branch and hope nobody needs to push a mainline change until you're done? I like "freeze this host out of the deploy set, have developer figure out wtf, add it back to the set when she's done", but I'm sure there are other ways. What's your model? Sounds like you're talking about a deployment error and not an active error? Taking a failing host out of your capacity and analyzing it is perfectly acceptable. Attaching a debugger to a live host that is interfacing w/ production services that vend confidential data is not*. FYI, I find the idea that there are two deployments for a single day appalling, as well as what sounds like a single repository? * this is some perfect world stuff because no company that is big enough to care about this started out this way and there will almost always be mechanisms for doing this very bad thing. FamDav fucked around with this message at 04:27 on Jul 10, 2014 |
# ? Jul 10, 2014 04:24 |
|
FamDav posted:Taking a failing host out of your capacity and analyzing it is perfectly acceptable. Attaching a debugger to a live host that is interfacing w/ production services that vend confidential data is not.
|
# ? Jul 10, 2014 04:30 |
|
Subjunctive posted:What do you guys do when more than one person has to diagnose a problem that's occurring in production? Take turns deploying your instrumentation and experiments? Try to set a deathtrap that will give you a core dump when the problem occurs, and then New Relic it back to a development server to inspect? Coordinate them all on one branch and hope nobody needs to push a mainline change until you're done? I like "freeze this host out of the deploy set, have developer figure out wtf, add it back to the set when she's done", but I'm sure there are other ways. What's your model? Do you really have no standard way to deploy a specific branch to a specific subset of machines other than sshing in and dicking around directly on the servers? It's a rather useful thing to be able to do.
|
# ? Jul 10, 2014 04:46 |
|
Plorkyeran posted:Do you really have no standard way to deploy a specific branch to a specific subset of machines other than sshing in and dicking around directly on the servers? It's a rather useful thing to be able to do. Sure, you can push to a tier, and we do, but that doesn't get you the data you need in many cases, because deploying means restarting and losing JIT state and cross-data center flow control and oh, you were livelocked with another service, that's cool, it'll happen again at some point. And the tiers are dynamic, so if the problem is currently manifested on host 99754 then you need to get to that host in its current configuration, not whatever set of 100 machines get allocated in for your test tier push. That erroring host is allocated, which is why it's getting traffic and triggering the thing you want to investigate. We have pretty awesome instrumentation and dynamic configuration control, but that just means that the things which require host interaction are weirder (like JIT bugs), not that they don't happen. You don't log into the hosts for the things you can instrument your way out of, you log in for the cases where the data you see doesn't make sense and you need to learn more about wtf is actually going on. Pushing a new version of code is a pretty crude way to inspect the state of a system that has entered a failure mode. Mostly these problems don't occur until many ops have been serviced, which is how they make it into production in the first place. Edit: I do not feel the slightest bit bad about multiple daily minor pushes, nor about the weekly major push, except that I'd love if we could do it more often without increasing other risk/cost (and we do roll hotfixes, but they're disruptive in a bunch of ways). Latency from "ready" to "deployed" is to be minimized, it benefits nobody. Subjunctive fucked around with this message at 06:07 on Jul 10, 2014 |
# ? Jul 10, 2014 06:01 |
|
Ithaqua posted:Seriously. There's almost never a reason for a developer to be poking around in a live production server, and there's absolutely never a legitimate reason for a change to be on a production box but not in source control. I worked for a payroll company and was on the Tiger Team. Our job was to tweak stuff directly on production to get it running again, then work out what was causing it to go wrong and pass that to the dev team as a defect report. I was one of six people with access to production: the four of us on the tiger team and the two DBAs.
|
# ? Jul 10, 2014 07:38 |
|
Tiger Team
|
# ? Jul 10, 2014 08:13 |
|
coffeetable posted:Tiger Team Yeah, they'd heard about teams in computing called Tiger Teams and applied the name erroneously. A real Tiger Team is a team of people whose job is to hack into the system and show where the vulnerabilities are.
|
# ? Jul 10, 2014 08:17 |
|
I sure love the code an (ex)coworker of mine produced. As you can see, the extensive documentation is quite helpful as well.
|
# ? Jul 10, 2014 09:16 |
M_48_56_37_67_14 is a great class name and I will take the wisdom of this naming scheme to my own development efforts.
|
|
# ? Jul 10, 2014 09:19 |
|
Plorkyeran posted:Do you really have no standard way to deploy a specific branch to a specific subset of machines other than sshing in and dicking around directly on the servers? It's a rather useful thing to be able to do. It's my company, we're the coding horror.
|
# ? Jul 10, 2014 11:49 |
|
SirViver posted:I sure love the code an (ex)coworker of mine produced. That's ok, it's... it's auto generated isn't... oh god those aren't auto generated
|
# ? Jul 10, 2014 12:00 |
|
nielsm posted:M_48_56_37_67_14 is a great class name and I will take the wisdom of this naming scheme to my own development efforts. M_4_8_15_16_23_42 seems to have all these weird unpredictable bugs.
|
# ? Jul 10, 2014 13:00 |
|
Please get rid of the line #3206, we cannot afford to have code outside standards. It was a hell of a week.
|
# ? Jul 10, 2014 13:19 |