v1nce
Sep 19, 2004

Plant your brassicas in may and cover them in mulch.

Mogomra posted:

The horror is that this third party requires that all the objects I create in their system have a unique name. It's not for security or anything, so it sounds like a job for uniqid(), right? Nope.


If you call it more than once too quickly, it will return the same "unique id." :suicide:

There's always mcrypt_create_iv (using MCRYPT_DEV_URANDOM) or openssl_random_pseudo_bytes, which should use /dev/urandom and be cryptographically secure (guaranteed different, unpredictable random values).

Note that heavy calls to /dev/urandom will expire the cache of available random values, and can result in a pause while more are generated.

At least whoever wrote the documentation for uniqid knows how to turn a negative into a positive:

PHP Uniqid Docs posted:

Can be useful, for instance, if you generate identifiers simultaneously on several hosts that might happen to generate the identifier at the same microsecond.
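For what it's worth, the docs' own mitigation is the second parameter. A quick sketch (core PHP only, nothing assumed) of plain uniqid() versus the more_entropy flag, which appends output from php_combined_lcg so back-to-back calls don't collide:

```php
<?php
// uniqid() alone is just the current time in microseconds, hex-encoded,
// so two calls in the same microsecond can return the same "unique" id.
$a = uniqid();
$b = uniqid();

// With more_entropy = true, a value from php_combined_lcg() is appended,
// which makes back-to-back collisions effectively impossible -- but it is
// still NOT cryptographically secure, so don't use it where an attacker
// guessing the value matters.
$c = uniqid('', true);
$d = uniqid('', true);

var_dump($c !== $d);
```

Note this only papers over the collision problem; the predictability problem McGlockenshire describes below the quote is inherent to deriving the id from the clock.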


Dessert Rose
May 17, 2004

awoken in control of a lucid deep dream...

v1nce posted:

Note that heavy calls to /dev/urandom will expire the cache of available random values, and can result in a pause while more are generated.

This is actually /dev/random and there's essentially no reason to ever use /dev/random for anything for exactly that reason.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

v1nce posted:

Note that heavy calls to /dev/urandom will expire the cache of available random values, and can result in a pause while more are generated.

urandom is the one that never blocks. random is the one that arbitrarily blocks when you use it heavily for not-very-good reasons.

Mogomra
Nov 5, 2005

simply having a wonderful time

Westie posted:

It's not a GUID generator, that's for sure. It returns a hex representation of server time, which, after compiling it, pauses for a microsecond or some poo poo like that so each call within that box returns a different result.

That seems like what happens when you pass true as the second parameter, but I just had it in a loop with no parameters (until I read those docs). It was definitely returning the same thing for multiple iterations. I dunno.

I just think the name is a real misnomer. :shrug:

McGlockenshire
Dec 16, 2005

GOLLOCKS!

Westie posted:

It's not a GUID generator, that's for sure. It returns a hex representation of server time, which, after compiling it, pauses for a microsecond or some poo poo like that so each call within that box returns a different result.

Oh no, it's worse than that, and more and more broken the further back in PHP release history you get. uniqid is an active security threat and it should never be exposed to users due to the information it leaks in plain sight.

For generating random crap with PHP, also consider ircmaxell's RandomLib, which uses some RFC-defined ways to mix together entropy sources so that the weaker ones suck slightly less. ircmaxell's the guy behind the hashing and password generation sanity built into newer PHP versions.

Keep in mind that mcrypt_create_iv can be absurdly slow, and if you use openssl_random_pseudo_bytes instead, pay attention to the by-ref second parameter...
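A sketch of what checking that by-ref parameter looks like (assumes the OpenSSL extension is loaded; bailing out on a weak source is one policy choice, not the only one):

```php
<?php
// openssl_random_pseudo_bytes() can silently fall back to a weak source;
// the only way to find out is the by-reference second parameter.
$strong = false;
$bytes  = openssl_random_pseudo_bytes(16, $strong);

if ($bytes === false || !$strong) {
    // Refuse to hand back predictable "random" data.
    throw new RuntimeException('No cryptographically strong randomness available');
}

$token = bin2hex($bytes); // 32 hex chars, safe to use as a unique identifier
```

If you never look at $strong, you can ship code that works fine on your box and quietly produces guessable tokens somewhere else.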

McGlockenshire fucked around with this message at 08:11 on Jul 9, 2014

Westie
May 30, 2013



Baboon Simulator

McGlockenshire posted:

Oh no, it's worse than that, and more and more broken the further back in PHP release history you get. uniqid is an active security threat and it should never be exposed to users due to the information it leaks in plain sight.

Well, it did pretty much say what I said!

Didn't know about php_combined_lcg though, thanks for pointing that out.

If PHP docs say something is insecure, then of course, by the grand scale of things it's a potential fuckup worthy of YOSPOS.

Me and a fellow goon/colleague discovered something similar w/ rand... but then it's quite easy to find. The thing I love about the rand functionality is that it generates up to 32,768 numbers per box. Now, imagine the hilarity that ensues when you're trying to import over 60,000 products and the product ID is generated using rand()...

ultramiraculous
Nov 12, 2003

"No..."
Grimey Drawer

Westie posted:

The thing I love about the rand functionality is that it generates up to 32,768 numbers per box.

..why? Just why. i mean I'm sure it's just "because PHP" but why is it only positive, signed 16-bit integers?

Westie posted:

product ID is generated using rand()...

...also why? I mean there's times where sequential ids might be a security fuckup, but it seems like products in a catalog isn't one of them?

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

ultramiraculous posted:

..why? Just why. i mean I'm sure it's just "because PHP" but why is it only positive, signed 16-bit integers?

It's a thin wrapper around the C rand() function, and rand()'s requirements were specified in 1989.
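You can see the ceiling from PHP itself. A quick sketch (the exact getrandmax() value is platform-dependent: it mirrors the C library's RAND_MAX, which is 32767 on Windows builds):

```php
<?php
// rand() tops out at the platform's RAND_MAX -- 32767 on Windows builds,
// 2^31 - 1 on most Linux builds.
echo getrandmax(), "\n";

// mt_rand() (Mersenne Twister) has the full 2^31 - 1 ceiling everywhere,
// so at minimum it won't exhaust its range after ~32k draws.
echo mt_getrandmax(), "\n"; // 2147483647

// Neither is fit for IDs that must be unique: over a range of n values you
// expect a collision after roughly sqrt(n) draws (the birthday bound), so
// even mt_rand() starts colliding around ~46k product imports.
```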

canis minor
May 4, 2011

gh... disregard me.. misread

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I'm sorry but the pedidos/peidos order/fart Portuguese typo needs more attention. Just imagine that one going through:

Welcome to our Portuguese ecommerce portal. Thank you for your fart. Here is your fart summary...

Your fart has shipped!

Problem with your fart? Do you need to change your fart?

Pollyanna
Mar 5, 2005

Milk's on them.


Fartmeter. Fartphile.

Mogomra
Nov 5, 2005

simply having a wonderful time
Which one is more of a horror?
  • The fact that I had to run '$ git checkout .' in production to reset some things that I changed to debug some loving stupid code.
  • The fact that I also cleared out some changes my boss made in production because he couldn't be bothered to use source control.

The Boss posted:

when you are on production, while we try very hard not to make changes there, sometimes there is a need, so it is best to use github for windows and do a sync so it commits changes

Because I stupidly assumed he wasn't developing in production without telling any of the other devs.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Mogomra posted:

Which one is more of a horror?
  • The fact that I had to run '$ git checkout .' in production to reset some things that I changed to debug some loving stupid code.
  • The fact that I also cleared out some changes my boss made in production because he couldn't be bothered to use source control.


Because I stupidly assumed he wasn't developing in production without telling any of the other devs.

The horror is that you are able to edit prod directly instead of pushing deployments.

fritz
Jul 26, 2003

Always do a 'git status' unless you're absolutely certain of what it'll be.

Westie
May 30, 2013



Baboon Simulator

ultramiraculous posted:

...also why? I mean there's times where sequential ids might be a security fuckup, but it seems like products in a catalog isn't one of them?

This is one of those design decisions that was in the system way before I joined, and I'm glad to say there's some progress being made to make the IDs sequential or something like that.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Volmarias posted:

The horror is that you are able to edit prod directly instead of pushing deployments.

Seriously. There's almost never a reason for a developer to be poking around in a live production server, and there's absolutely never a legitimate reason for a change to be on a production box but not in source control.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Spatial posted:

It's for an embedded radio transceiver that runs off a little battery. Normally there's a host CPU which controls what it does, but having user code running on the transceiver means you can power the big ol' CPU down and allow the transceiver to run autonomously, giving you longer battery life.

The language allows you to control the state machine of the transceiver, send/receive packets, do arithmetic operations, read/write memory, wait on hardware interrupts, control GPIO pins and the like. You can also put the transceiver to sleep, cause some code to run when a packet is received, all sorts of cool things.

e: I say this in the present tense, but... :v:

use forth

Milotic
Mar 4, 2009

9CL apologist
Slippery Tilde
You might be dealing with a high-latency/low-bandwidth link where you need to tweak a configuration value to get you or your client back in the market *right now*. Cutting a new version and deploying it takes time. That's an appropriate reason. The real horror is not then immediately merging the change back into source control so that your changes are re-releasable.

And it's perfectly reasonable for developers to have access to and indeed troubleshoot and fix problems in production when you're geared towards minimising downtime.

Internet Janitor
May 17, 2008

"That isn't the appropriate trash receptacle."

I was trying to resist the urge to make that suggestion. :shobon:

necrotic
Aug 2, 2005
I owe my brother big time for this!

Milotic posted:

And it's perfectly reasonable for developers to have access to and indeed troubleshoot and fix problems in production when you're geared towards minimising downtime.

Maybe at the start, but once you grow up as a company it's a terrible idea. Provide tools that allow you to see what's happening. Things like Log Stash, New Relic, and Librato can provide a ton of insight with no direct access required.

And once you're dealing with a cluster of servers, not just one box, pushing out a new deploy is certainly better than changing poo poo on one box at a time.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
If pushing out a new version through your normal deployment process takes so long that you have to skip it sometimes, then you should fix your deployment process.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Plorkyeran posted:

If pushing out a new version through your normal deployment process takes so long that you have to skip it sometimes, then you should fix your deployment process.

OK, I'll get right on that.

It takes more than an hour to push a deploy at work, which we do twice a day. (Used to take a lot longer, before we replaced the C++ code generation with a JIT.) If we're seeing a failure only on certain hosts, it can make a lot of sense to connect directly to one of them, observe, change a configuration parameter, selectively kill processes, attach a debugger, maybe selectively apply an instrumentation patch or test a fix in place. If you have to do a full deploy you risk losing the state that will help you find the actual transient problem. (And you don't risk mistakenly deploying your scratch change to the whole fleet.)

When you're done the machine gets reprovisioned back into the normal fleet configuration of course, and you don't do it in order to change what is "deployed in production", but rather to use production state to analyze a problem. Still, absolutist "you should never have anything on a prod host that wasn't fully deployed" is counterproductive.

Mogomra
Nov 5, 2005

simply having a wonderful time

Volmarias posted:

The horror is that you are able to edit prod directly instead of pushing deployments.

Oh my god, you obviously don't understand the intricacies of Lean Startups or Minimal Viable Product. - My Boss

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Mogomra posted:

Oh my god, you obviously don't understand the intricacies of Lean Startups or Minimal Viable Product. - My Boss

What do you guys do when more than one person has to diagnose a problem that's occurring in production? Take turns deploying your instrumentation and experiments? Try to set a deathtrap that will give you a core dump when the problem occurs, and then New Relic it back to a development server to inspect? Coordinate them all on one branch and hope nobody needs to push a mainline change until you're done? I like "freeze this host out of the deploy set, have the developer figure out wtf, add it back to the set when she's done", but I'm sure there are other ways. What's your model?

Mogomra
Nov 5, 2005

simply having a wonderful time
I'm pretty sure I explained what happens pretty well. We both RDP, yes RDP, into THE loving production server. Hilarity ensues and I get yelled at because I'm bottom on the ladder.

ijustam
Jun 20, 2005

I wish we had a deployment process.

ohgodwhat
Aug 6, 2005

I really enjoyed when an intern looked at me like I was crazy when I told him we don't deploy things to production the same way one does to heroku.

FamDav
Mar 29, 2008

Subjunctive posted:

What do you guys do when more than one person has to diagnose a problem that's occurring in production? Take turns deploying your instrumentation and experiments? Try to set a deathtrap that will give you a core dump when the problem occurs, and then New Relic it back to a development server to inspect? Coordinate them all on one branch and hope nobody needs to push a mainline change until you're done? I like "freeze this host out of the deploy set, have the developer figure out wtf, add it back to the set when she's done", but I'm sure there are other ways. What's your model?

sounds like you're talking about a deployment error and not an active error? Taking a failing host out of your capacity and analyzing it is perfectly acceptable. Attaching a debugger to a live host that is interfacing w/ production services that vend confidential data is not*.

fyi I find the idea that there are two deployments in a single day appalling, as well as what sounds like a single repository?

* this is some perfect world stuff because no company that is big enough to care about this started out this way and there will almost always be mechanisms for doing this very bad thing.

FamDav fucked around with this message at 04:27 on Jul 10, 2014

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

FamDav posted:

Taking a failing host out of your capacity and analyzing it is perfectly acceptable. Attaching a debugger to a live host that is interfacing w/ production services that vend confidential data is not.

You say that like they give the slightest gently caress about their users' privacy.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Subjunctive posted:

What do you guys do when more than one person has to diagnose a problem that's occurring in production? Take turns deploying your instrumentation and experiments? Try to set a deathtrap that will give you a core dump when the problem occurs, and then New Relic it back to a development server to inspect? Coordinate them all on one branch and hope nobody needs to push a mainline change until you're done? I like "freeze this host out of the deploy set, have developer figure out wtf, add it back to the set when she's done", but Im sure there are other ways. What's your model?

Do you really have no standard way to deploy a specific branch to a specific subset of machines other than sshing in and dicking around directly on the servers? It's a rather useful thing to be able to do.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Plorkyeran posted:

Do you really have no standard way to deploy a specific branch to a specific subset of machines other than sshing in and dicking around directly on the servers? It's a rather useful thing to be able to do.

Sure, you can push to a tier, and we do, but that doesn't get you the data you need in many cases, because deploying means restarting and losing JIT state and cross-datacenter flow control, and oh, you were livelocked with another service, that's cool, it'll happen again at some point. And the tiers are dynamic, so if the problem is currently manifested on host 99754 then you need to get to that host in its current configuration, not whatever set of 100 machines gets allocated in for your test tier push. That erroring host is allocated, which is why it's getting traffic and triggering the thing you want to investigate.

We have pretty awesome instrumentation and dynamic configuration control, but that just means that the things which require host interaction are weirder (like JIT bugs), not that they don't happen. You don't log into the hosts for the things you can instrument your way out of, you log in for the cases where the data you see doesn't make sense and you need to learn more about wtf is actually going on. Pushing a new version of code is a pretty crude way to inspect the state of a system that has entered a failure mode. Mostly these problems don't occur until many ops have been serviced, which is how they make it into production in the first place.

Edit: I do not feel the slightest bit bad about multiple daily minor pushes, nor about the weekly major push, except that I'd love if we could do it more often without increasing other risk/cost (and we do roll hotfixes, but they're disruptive in a bunch of ways). Latency from "ready" to "deployed" is to be minimized, it benefits nobody.

Subjunctive fucked around with this message at 06:07 on Jul 10, 2014

TheresaJayne
Jul 1, 2011

Ithaqua posted:

Seriously. There's almost never a reason for a developer to be poking around in a live production server, and there's absolutely never a legitimate reason for a change to be on a production box but not in source control.

I worked for a payroll company and was on the Tiger Team. Our job was to tweak stuff directly on production to get it running again, then work out what was causing it to be wrong and pass that to the dev team as a defect report.

I was one of six people with access to production: the four on the Tiger Team and the two DBAs.

coffeetable
Feb 5, 2006

TELL ME AGAIN HOW GREAT BRITAIN WOULD BE IF IT WAS RULED BY THE MERCILESS JACKBOOT OF PRINCE CHARLES

YES I DO TALK TO PLANTS ACTUALLY
Tiger Team

TheresaJayne
Jul 1, 2011

Yeah, they had heard about teams in computing called Tiger Teams and called us that erroneously.

The real meaning of Tiger Team is a team of people whose job is to hack into the system and show where the vulnerabilities are.

SirViver
Oct 22, 2008
I sure love the code an (ex)coworker of mine produced.




As you can see, the extensive documentation is quite helpful as well.

:shepicide:

nielsm
Jun 1, 2009



M_48_56_37_67_14 is a great class name and I will take the wisdom of this naming scheme to my own development efforts.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Plorkyeran posted:

Do you really have no standard way to deploy a specific branch to a specific subset of machines other than sshing in and dicking around directly on the servers? It's a rather useful thing to be able to do.

It's my company, we're the coding horror.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

SirViver posted:

I sure love the code an (ex)coworker of mine produced.




As you can see, the extensive documentation is quite helpful as well.

:shepicide:

That's ok, it's... it's auto generated isn't... oh god those aren't auto generated :gonk:

kitten smoothie
Dec 29, 2001

nielsm posted:

M_48_56_37_67_14 is a great class name and I will take the wisdom of this naming scheme to my own development efforts.

M_4_8_15_16_23_42 seems to have all these weird unpredictable bugs.


Space Kablooey
May 6, 2009



Please get rid of the line #3206, we cannot afford to have code outside standards.



It was a hell of a week.
