|
OSI bean dip posted:i only need an account to do online grocery orders
i heard this was happening, so i proactively changed my password, and then a week or so later they emailed to tell me to change my password because they detected a breach. makes me wonder if they know which accounts were leaked or if they're just watching for pw changes and responding to that haha
|
# ? Feb 21, 2017 18:20 |
|
troy hunt could make some serious money by setting up a watchdog notification service for businesses to sign up to, for a small yearly fee
dude prolly doesn't need any more cash though, he's already got a massive house on the aus gold coast
|
# ? Feb 21, 2017 18:23 |
|
Bhodi posted:you're assuming CI is only used for spitting out artifacts, which is really limiting what a good setup can do. I have it so if a dev tags one of their branches the CI will pick it up and run its tests for them, so they can repeatedly run different tests on feature branches in their own sandbox ahead of time, and know before even creating the pull request into dev whether it'll pass. it doesn't auto-test on every feature branch commit, only on commits to the dev branch (which no one should ever commit anything directly to unless your group is tiny)
if a dev is running their own feature branch who cares if they break the build? they're the only ones working in it.
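a CI listener for that tag-driven workflow could be sketched roughly like this; the `ci/<branch>/<run>` tag naming scheme and function names are illustrative assumptions, not anything from the setup described above:

```python
import re

# assumed tag convention for requesting a CI run on a feature branch:
# "ci/<branch>/<run-number>" -- purely illustrative, use whatever your org likes
CI_TAG = re.compile(r"^ci/(?P<branch>[\w-]+)/(?P<run>\d+)$")

def tags_to_test(all_tags, already_tested):
    """Return (tag, branch) pairs for CI-request tags not yet run.

    A poller would call this with the repo's current tag list and the
    set of tags it has already built, then kick off a sandboxed test
    run for each pending entry."""
    pending = []
    for tag in all_tags:
        match = CI_TAG.match(tag)
        if match and tag not in already_tested:
            pending.append((tag, match.group("branch")))
    return pending
```

the point is that only explicitly tagged commits get tested, so devs opt in per run instead of burning CI time on every feature-branch commit.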
|
# ? Feb 21, 2017 18:31 |
|
OSI bean dip posted:that said the rumour is that they're unsure how they got breached even though it seems to be based on previous breach data (i.e. from Ashley Madison or whatever) being used to get access
i'd also heard that it was less "someone broke the ice on our gibson" and more "people bad at picking passwords are also bad at not reusing passwords"
|
# ? Feb 21, 2017 18:34 |
|
flakeloaf posted:i'd also heard that it was less "someone broke the ice on our gibson" and more "people bad at picking passwords are also bad at not reusing passwords"
they also have garbage password requirements
|
# ? Feb 21, 2017 18:35 |
|
flakeloaf posted:i'd also heard that it was less "someone broke the ice on our gibson" and more "people bad at picking passwords are also bad at not reusing passwords"
not in my case, mine was randomly generated. however, this
OSI bean dip posted:they also have garbage password requirements
is true. i usually use 12 characters of mixed upper/lower/number/symbol, but pc plus required an 8 character max with no symbols
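for reference, a 12-character upper/lower/number/symbol password like the one described is trivial to generate with python's `secrets` module; a minimal sketch:

```python
import secrets
import string

def random_password(length=12):
    """Generate a random password guaranteed to contain at least one
    lowercase letter, uppercase letter, digit, and symbol."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        # draw candidates until one satisfies all four character classes
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

any site imposing an 8-character max with no symbols forces you well below that, which is the complaint above.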
|
# ? Feb 21, 2017 18:40 |
|
Shaggar posted:if a dev is running their own feature branch who cares if they break the build? they're the only ones working in it.
also good and related: make sure you don't just lint and do basic CI, better check for passwords as well because devs "whoops" them from their private code into dev all the loving time
Bhodi fucked around with this message at 18:45 on Feb 21, 2017 |
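that kind of CI secret check can be as simple as a regex pass over the committed text; a crude sketch (the patterns here are illustrative, and a real setup would use a dedicated scanner like gitleaks or truffleHog):

```python
import re

# crude patterns for hard-coded credentials; real scanners ship far more
# thorough rulesets than these three examples
SECRET_PATTERNS = [
    re.compile(r"""(password|passwd|pwd)\s*=\s*['"][^'"]+['"]""", re.IGNORECASE),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def find_secrets(text):
    """Return (line number, line) pairs that look like hard-coded secrets,
    suitable for failing a CI job before the commit lands in dev."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

wiring this into CI so a non-empty result fails the build is the cheap version of the check described above.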
# ? Feb 21, 2017 18:42 |
|
Cold on a Cob posted:not in my case, mine was randomly generated
Pretty sure that was just a mass email to everyone who has an account?
|
# ? Feb 21, 2017 18:43 |
|
Bhodi posted:if you don't do it that way, you need the CI as a gatekeeper to prevent garbage making its way directly into the dev branch and also for testing the feature->dev pull requests regardless
i guess I don't really see the problem w/ broken builds getting into the CI system, since either A) the non-compiling code is critical for moving forward, in which case it has to be fixed and building the code before it was checked in is pointless, or B) the non-compiling code is not critical for moving forward, in which case we use the last compiled build. idk, maybe none of my projects are hugely monolithic like that.
|
# ? Feb 21, 2017 18:48 |
|
Shaggar posted:i guess I don't really see the problem w/ broken builds getting into the CI system since either
best case it becomes a joke, worst case egos get involved and the knives come out. mostly it's managed by a mutual understanding of what being in a particular branch means; if you work with ppl who know there may be broken builds, that's fine for your org. but we work on a sprint where a release branch is branched off dev on a specific day and time, not necessarily by the people committing code, and they need to know it's at least in a mostly-functional state so they can start doing integration/acceptance testing
Bhodi fucked around with this message at 18:55 on Feb 21, 2017 |
# ? Feb 21, 2017 18:51 |
|
invision posted:Pretty sure that was just a mass email to everyone who has an account?
yeah, could be. they're not sending to everyone at the same time, because i read about it a while ago, but they could be doing it in chunks
|
# ? Feb 21, 2017 18:51 |
|
they email the most likely to be breached first
|
# ? Feb 21, 2017 18:53 |
|
oh yeah makes sense, i just read that they are now asking everyone to reset even if they already reset their passwords after the breach was known to have happened
|
# ? Feb 21, 2017 18:58 |
|
Harik posted:the command protocol I shot down was full of direct unauthenticated-to-privileged fuckups because they trusted the ~~app~~ instead of assuming it was hostile.
Shaggar posted:i guess I don't really see the problem w/ broken builds getting into the CI system since either
Jimmy the Junior Dev keeps "fixing" his assigned issues (lol), but in doing so he's breaking the build for the entire company because he's hard-coding absolute paths to his local desktop or something dumb. Now everyone's build is broken. Is it me? Is it the checked-in code? Time to waste a bunch of time figuring this out. The build does get fixed, but everyone's time is wasted because of one single dumb gently caress. It's sort of manageable if your team is 3 people, but consider something with thousands of people all working in the same repository. You can still use yesterday's artifacts for your deployment, of course, so it's not like you're broken in production (you hope), but you're still eating away everyone's time. Having a build server gating your proposed commit is an unalloyed good, and I'm not really sure how you would argue otherwise, but I guess PHP is just broken all the time anyway so YOLO.
|
# ? Feb 21, 2017 19:00 |
|
when you have a team collaborating on a single repository, having some amount of gatekeeping to stop patches from even appearing in master if they don't build + pass tests is pretty nice
it's not fool-proof unless you're willing to make committing super painful, though. for example, the swift ci system will test a pr and then tag that commit if the build succeeds; once a commit is tagged, it can be merged. but we allow real merges there, i.e. if someone else merges their patch before you, you don't have to go back to ci for re-approval. so if there's a conflict, and it's not obvious enough to be detected by version control, you can break ci and someone will have to revert. but that's true anyway: we don't run every possible test as part of this "smoke-testing", and sometimes bugs are non-deterministic.
still, even if it's not perfect, having that extra step of at least forcing a patch to pass tests on its own does a lot to keep the repository consistently clean, which is important when you have a large-ish team updating their checkouts on their own random-ish schedules
but yeah, don't trust random people on the internet to not submit malicious patches
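the "tag on green, then allow real merges without re-approval" bookkeeping described above can be sketched as a small policy function; the data shapes and names are illustrative assumptions, not actual swift project infrastructure:

```python
def can_land(commit, ci_approved, parents):
    """A commit may land on master if CI tagged it as passing, or if it
    is a merge commit whose parents all individually passed CI (real
    merges are allowed without a fresh CI round trip).

    `ci_approved` is the set of commits CI has tagged as green;
    `parents` maps each commit to its list of parent commits."""
    if commit in ci_approved:
        return True
    ps = parents.get(commit, [])
    return len(ps) > 1 and all(p in ci_approved for p in ps)
```

which is exactly the gap the post admits to: two individually green patches can still conflict semantically and break master, forcing a revert.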
|
# ? Feb 21, 2017 19:03 |
|
i do have to admit using the build servers themselves to generate bitcoins is a spark of genius, since in a lot of cases part of the build process is to pull poo poo all over from random places on the internet (hello, maven), so outgoing firewalls are often already open to the build slaves
the next step is obviously combing for :8080 jenkins and non-github stuff that's open facing so you can do the same thing with them. there are lots and lots of CI systems open to the internet out there
|
# ? Feb 21, 2017 19:06 |
|
Volmarias posted:
If your builds are so large that minor changes in one branch break productivity for everyone, your design is probably pretty heinous. also it's infinitely more likely that the junior dev will hoard his code because the CI keeps rejecting him, and then he puts in a slew of changes that have hardcoded garbage to pass tests but that breaks everything at runtime. now you've been building your own stuff against his broken code for weeks cause you've trusted the CI server to gate things for you.
|
# ? Feb 21, 2017 19:09 |
|
Shaggar posted:If your builds are so large that minor changes in one branch break productivity for everyone, your design is probably pretty heinous. also it's infinitely more likely that the junior dev will hoard his code because the CI keeps rejecting him, and then he puts in a slew of changes that have hardcoded garbage to pass tests but that breaks everything at runtime. now you've been building your own stuff against his broken code for weeks cause you've trusted the CI server to gate things for you.
any junior dev that does an end-run around the testing system deliberately, well, that's not a dev that's around for very long. plus, you know, code review, right? collaboration?
Bhodi fucked around with this message at 19:21 on Feb 21, 2017 |
# ? Feb 21, 2017 19:19 |
|
sure, but one real great way to trigger an out-of-band code review is a check-in that doesn't build. lots of jr devs aren't gonna ask for help cause they're just out of college and they don't understand that they didn't learn anything there. now you're creating a build environment that's hostile to them and they're gonna fall back on the bad habits their profs taught them. A jr dev dodging tests in order to get code to compile is to be expected, because they don't understand why tests are important yet. I like failing builds as a mechanism to detect struggling devs who aren't asking for help. Also, I want code in the repo even if it's not finished. The last thing I want is code sitting on a laptop that hasn't been checked in in days because it's not "perfect".
|
# ? Feb 21, 2017 19:31 |
|
Shaggar posted:If your builds are so large that minor changes in one branch break productivity for everyone, your design is probably pretty heinous. also it's infinitely more likely that the junior dev will hoard his code because the CI keeps rejecting him, and then he puts in a slew of changes that have hardcoded garbage to pass tests but that breaks everything at runtime. now you've been building your own stuff against his broken code for weeks cause you've trusted the CI server to gate things for you.
our entire client codebase is a single repo. if you change the wrong poo poo you can absolutely stop the entire compile from happening. enterprise poo poo is like that oftentimes. stop thinking your 5-man-shop use cases are what everyone deals with.
|
# ? Feb 21, 2017 19:34 |
|
Shaggar posted:sure but one real great way to trigger an out of band code review is a check in that doesn't build. lots of jr devs aren't gonna ask for help cause they're just out of college and they dont understand that they didn't learn anything there. now you're creating a build environment that's hostile to them and they're gonna fall back on the bad habits their profs taught them. A jr dev dodging tests in order to get code to compile is to be expected of a jr dev because they don't understand why tests are important yet.
|
# ? Feb 21, 2017 19:42 |
|
e: nm, moved to politics security thread
Bhodi fucked around with this message at 19:50 on Feb 21, 2017 |
# ? Feb 21, 2017 19:46 |
|
cis autodrag posted:our entire client codebase is a single repo. if you change the wrong poo poo you can absolutely stop the entire compile from happening. enterprise poo poo is like that oftentimes. stop thinking your 5-man-shop use cases are what everyone deals with.
your bad design is not shop size specific
|
# ? Feb 21, 2017 20:10 |
|
Bhodi posted:and that's part of why we have a mostly separate and stable dev and only-test-when-you-want feature branches and never deny / rollback direct commits, i can see using failing builds as a barometer but that's untenable in the long run or with larger groups; designing a substandard system because you're assuming people are going to otherwise cheat it is no way to go through life
yeah, I guess I never work on huge monolithic repos, so it's never a problem. god, I can't imagine how painful those builds are even when they're working.
|
# ? Feb 21, 2017 20:11 |
|
Shaggar posted:sure, but one real great way to trigger an out-of-band code review is a check-in that doesn't build. lots of jr devs aren't gonna ask for help cause they're just out of college and they don't understand that they didn't learn anything there. now you're creating a build environment that's hostile to them and they're gonna fall back on the bad habits their profs taught them. A jr dev dodging tests in order to get code to compile is to be expected, because they don't understand why tests are important yet.
yeah, you definitely want to encourage code to get committed and tested regularly. the easiest way to do that is to allow ci to build + test from an arbitrary branch, but only make it a gatekeeper on the master / release branches. you can watch + comment on their work on their feature branches, and if they're not committing to them or seem to be flailing against ci, it's easy to step in
|
# ? Feb 21, 2017 21:25 |
|
yeah I'm deffo not talking about letting these guys commit into release branches or anything and the ability to check in on what they're doing w/out hovering over their shoulder is super gr8.
|
# ? Feb 21, 2017 21:38 |
|
OSI bean dip posted:i only need an account to do online grocery orders i hope that means "president's choice plus"
|
# ? Feb 21, 2017 21:56 |
|
https://googleprojectzero.blogspot.com/2017/02/attacking-windows-nvidia-driver.html
NVIDIA driver bugs
quote:Most of the vulnerabilities found (13 in total) in escape handlers were very basic mistakes, such as writing to user provided pointers blindly, disclosing uninitialised kernel memory to user mode, and incorrect bounds checking. There were also numerous issues that I noticed (e.g. OOB reads) that I didn't report because they didn't seem exploitable.
|
# ? Feb 21, 2017 22:13 |
|
"im trying to scan your device but everything is blocked, how do I fix that" - an actual customer
|
# ? Feb 21, 2017 23:06 |
|
https://twitter.com/MalwareTechBlog/status/834007198556680192 I have lovely opinions and as a result I got blocked
|
# ? Feb 21, 2017 23:45 |
|
lol in addition to the pcplus email, I got two fake Apple ID login phishing emails. fun times for all!
|
# ? Feb 21, 2017 23:49 |
|
Javid posted:I was really excited to be getting a phone with a fingerprint reader until I actually got it and found out it forces you to put in a code every so often regardless, thus defeating the time-saving purpose of having a fingerprint reader in a spot you'd normally grab the thing. Way to almost implement a good idea!
|
# ? Feb 22, 2017 00:12 |
|
|
# ? Feb 22, 2017 00:17 |
|
if your phone usually only required a fingerprint to unlock past the first boot/period of non-use, but if it were possible to 'hey siri' it into a state of requiring a pin/password again, would that legally count as obstruction?
|
# ? Feb 22, 2017 00:52 |
|
Thanks Ants posted:if your phone usually only required a fingerprint to unlock past the first boot/period of non-use, but if it were possible to 'hey siri' it into a state of requiring a pin/password again, would that legally count as obstruction? as an american, i hope that this administration at least gives us a supreme court case with the title Trump v. WeedGoku666
|
# ? Feb 22, 2017 01:28 |
|
This made me very sad, thanks thread!
|
# ? Feb 22, 2017 02:30 |
|
ayyy https://twitter.com/jeremiahg/status/834085560117440512
|
# ? Feb 22, 2017 04:08 |
|
OSI bean dip posted:i only need an account to do online grocery orders
That looks suspiciously like a phishing scam.
|
# ? Feb 22, 2017 04:29 |
|
|
# ? Feb 22, 2017 07:53 |
|
Volmarias posted:
Our implementation is to have CI on develop and release branches. Jimmy does his development on his own branch, with peer code review on push to develop and senior code review on pushing a bunch of changes to release, and you only swap things to the production environment when you're happy it's working on staging. Everyone will be working on their own branch, so you'll only encounter this sort of issue if the code review misses it and someone happens to branch from dev after the point the dodgy commit is made but before anyone notices the dev staging servers are all hosed up. I mean, it does happen, development servers are buggy 24/7 anyway, but it's pretty rare for some obvious "nothing works at all" fuckup to interfere with the productivity of other developers.
|
# ? Feb 22, 2017 09:45 |