|
According to the PsExec page on the Windows Sysinternals site, "the password and command are encrypted in transit to the remote system": https://technet.microsoft.com/en-us/sysinternals/pxexec. Of course, as always, it's worth verifying that yourself with a packet capture. Usually my objections to PsExec are because it's being used in a manner that isn't appropriate. If you're doing a bit of ad-hoc troubleshooting then PsExec is fine, but if you're attempting to implement large-scale automation then you should really be using WinRM (as Mustache Ride already mentioned). I see idiots implementing PsExec wrappers in PowerShell scripts all the time and it's dumb as hell.
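(For reference, the WinRM route recommended above looks roughly like this. A minimal sketch, assuming PowerShell Remoting is already enabled on the targets; the computer and service names are placeholders:)

```powershell
# Run a command on remote machines over WinRM instead of wrapping PsExec.
# 'SERVER01' and 'SERVER02' are placeholder computer names.
Invoke-Command -ComputerName 'SERVER01', 'SERVER02' -ScriptBlock {
    Get-Service -Name 'Spooler' | Select-Object -Property Status, Name
}

# For several commands against the same host, reuse a session:
$session = New-PSSession -ComputerName 'SERVER01'
Invoke-Command -Session $session -ScriptBlock { Restart-Service -Name 'Spooler' }
Remove-PSSession -Session $session
```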
|
# ? Aug 5, 2016 02:18 |
|
|
|
My senior admin doesn't want to configure WinRM because he thinks it's insecure. Which means I can't just Invoke-Command, I have to use PsExec and run the command that way. I would also have to automate configuring WinRM on every machine in the environment. I'll see if I can use one of those workarounds. At least I'm not a domain admin, there's a little less poo poo on the sandwich that way. I think. PsExec has really been making everything a pain in the rear end, that's for sure. I've cut it down to one use at this point by putting all the other relevant information, like file locations, into a local folder that I Copy-Item over, and just PsExec a batch file. Now the problem is getting PsExec to accept the username and password from Get-Credential, because right now I'm having to put them into the script in plaintext and cut the password out between tests. I'm probably going to ask the powershell thread about that tomorrow. 22 Eargesplitten fucked around with this message at 03:09 on Aug 5, 2016 |
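(A hedged sketch of the Get-Credential-to-PsExec piece being asked about here: PsExec takes -u/-p switches, and a PSCredential can be unwrapped with GetNetworkCredential(). The target name and batch file path are placeholders; note the unwrapped password sits in this process's memory in plaintext, and PsExec itself transmits the credentials to the remote machine:)

```powershell
# Prompt once instead of hard-coding the password in the script.
$cred = Get-Credential

# GetNetworkCredential() exposes the plaintext for tools (like psexec.exe)
# that can't consume a SecureString directly.
$netCred = $cred.GetNetworkCredential()

# '\\TARGETPC' and the batch file path are placeholders.
& psexec.exe \\TARGETPC -u $netCred.UserName -p $netCred.Password `
    cmd /c 'C:\Staging\run.bat'
```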
# ? Aug 5, 2016 03:04 |
|
Is anyone familiar with eDrive / IEEE 1667 / Samsung Encrypted Drive? I'm trying to wrap my head around why the drives leave the factory with the capability disabled, and why enabling it requires you to first set it to "ready to be enabled" from Windows, then secure erase, then reinstall the OS from scratch, no restoring images allowed. The drive didn't come with a boot disk or anything to enable eDrive before one goes through installing their entire OS just to wipe it. What's more, the Samsung secure erase bootable tool tells you to uncable your drive even if it's plugged directly into the board via M.2, but it appears my Asus UEFI handles that just as well. I really don't want to go to all this work just to find out it somehow won't work after a clean install because I used the wrong manufacturer's secure erase, though at least I can restore an image if everything fails.
|
# ? Aug 5, 2016 04:11 |
|
22 Eargesplitten posted:My senior admin doesn't want to configure WinRM because he thinks it's insecure. Which means I can't just invoke-command, I have to PSExec and use the command that way. I would also have to automate configuring WinRM on every machine in the environment.

Your senior admin is a dingus. WinRM can be easily configured via Group Policy and whatnot on all your devices. If you need to store a password securely then you can use ConvertFrom-SecureString and ConvertTo-SecureString. First you encrypt the password and output it to a file (note that you have to run this command as the user account which will be executing the script: without an explicit -Key, these cmdlets use the Windows Data Protection API, which ties the encryption key to that user on that machine):

quote:ConvertFrom-SecureString -SecureString (Read-Host -AsSecureString -Prompt 'Enter String') | Out-File -FilePath "$env:TEMP\encrypted.txt"

Then in your script you can retrieve the encrypted password, convert it to a secure string and then create a PSCredential object along with the username:

quote:$CredentialObject = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList ('username', (Get-Content "$env:TEMP\encrypted.txt" | ConvertTo-SecureString))

I use this method with scripts for Office 365/Exchange Online automation and it works nicely. Oh and lol, there's a PowerShell thread? I would be all up in that, where's it at? Edit: it's probably already clear but the snippets above are examples. Don't just store the file in the temp folder, put it alongside the script and configure NTFS ACLs on it so that only the service account that will be executing the script can read it (just a little bit of extra protection, although not infallible). Pile Of Garbage fucked around with this message at 09:42 on Aug 5, 2016 |
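(Stitched together, a script using the stored credential might look like the following sketch. The paths, username, and target server are placeholders; because the encrypted blob only decrypts under the account that created it, the setup line must run as the same user, on the same box, that will run the script:)

```powershell
# One-time setup, run interactively as the service account:
ConvertFrom-SecureString -SecureString (Read-Host -AsSecureString -Prompt 'Enter password') |
    Out-File -FilePath 'C:\Scripts\encrypted.txt'

# In the scheduled script, rebuild the credential and use it:
$secure = Get-Content -Path 'C:\Scripts\encrypted.txt' | ConvertTo-SecureString
$cred = New-Object -TypeName System.Management.Automation.PSCredential `
    -ArgumentList ('DOMAIN\svc-account', $secure)

Invoke-Command -ComputerName 'SERVER01' -Credential $cred -ScriptBlock { hostname }
```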
# ? Aug 5, 2016 09:36 |
|
The Powershell thread is in Cavern of COBOL.
|
# ? Aug 5, 2016 16:12 |
|
Cugel the Clever posted:An honest, if inflammatory question: Does Classic Shell have legitimate use scenarios beyond autists obstinately refusing to adopt modern UI? On the one hand I read "customization", which I don't necessarily object to; on the other, I saw "Classic IE" listed as one of its selling points and nearly spit out my drink. Kind of makes me long for the good ol' days where being a begrudging Windows user meant goofing around with Litestep or bb.
|
# ? Aug 6, 2016 01:22 |
|
Stupid question from someone who has near-zero idea what he's doing and is just mucking about on a Saturday morning. I'm not particularly up on the vocab, so please forgive me if I call a shovel a spade: I've always been curious about how to go about actually checking signatures on things I download. Started by downloading latest version of Putty and, after a bit of aimlessly casting about for native Powershell commands, coming down on code:
code:
With basic SHA hash stuff sort of figured out, I decided to take a look at GPG's signature verification whatsits. Pulled the latest executable from their website and decided to run the above tests on it, only to be somewhat miffed/confused that they only list the SHA-1 checksums—have I misunderstood my reading elsewhere that SHA-1 is better than nothing, but needlessly insecure relative to newer algorithms? The SHA-1 checksum matched up fine, but I decided to check the signature using an older GPG version I'd installed at some point in the past and never gotten around to actually playing with (or verifying the integrity of). Downloaded the .sig file for the relevant installer from the website, ran code:
code:
quote:If the output of the above command is similar to the following, then either you don't have our distribution keys (our signing keys are here) Edit: Holy table breaking, batman. Fixed. Cugel the Clever fucked around with this message at 16:51 on Aug 6, 2016 |
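(For what it's worth, the kind of native checksum comparison described above can be done with Get-FileHash, which ships with PowerShell 4.0+. A sketch; the file name and published hash below are placeholders:)

```powershell
# Compare a download's hash against the value published on the vendor's page.
# '.\putty.exe' and $published are placeholders for the real file and value.
$published = 'EXPECTED-HASH-FROM-VENDOR-PAGE'
$actual = (Get-FileHash -Path '.\putty.exe' -Algorithm SHA256).Hash

if ($actual -eq $published) {
    'Checksum OK'
} else {
    'MISMATCH - do not run this file'
}
```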
# ? Aug 6, 2016 16:49 |
|
You need the public key used to sign the executable in your keyring. GPG is saying it doesn't have access to the public key, so it can't validate the sig. Grab the keys at the link they gave you (the signing keys), import them into your GPG keyring, and then do the sig check again.
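(Roughly, assuming gpg is on your PATH and using placeholder file names, the import-then-verify dance is:)

```powershell
# Import the distributor's public signing keys (downloaded from their site).
gpg --import .\gnupg-signing-keys.asc

# Verify the detached signature against the installer.
gpg --verify .\installer.exe.sig .\installer.exe

# Good practice: compare the imported key's fingerprint against one
# published out-of-band (HTTPS page, release announcement, etc.).
gpg --fingerprint
```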
|
# ? Aug 6, 2016 16:54 |
|
Cugel the Clever posted:With basic SHA hash stuff sort-of figured out, I decided to take a look at GPG's signature verification whatsits. Pulled the latest executable from their website and decided to run the above tests on it, only to be somewhat miffed/confused that they only list the SHA1 checksums—have I misunderstood my reading elsewhere that SHA1 is better than nothing, but needlessly insecure relative to new algorithms?
|
# ? Aug 9, 2016 20:58 |
|
Since everyone here hates virus scanners, what do I do about Windows Defender? Leave it on or turn it off? Also, I noticed in my GitHub account I have an option to upload a PGP key, but I'm not really sure what this is. Is it just basically like having a second SSH key I need to babysit?
|
# ? Aug 10, 2016 15:35 |
|
Boris Galerkin posted:Since everyone hates virus scanners here, what do I do about Windows Defender? Leave it on or turn it off? Just leave it on, but don't waste money on another product. As for PGP/GPG, it's there to verify that the code you published actually came from you, i.e. to prevent tampering. It's trivial to falsify who you are in Git, so signing the commit adds a level of verification.
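(The commit-signing mechanics, as a sketch; 'ABCD1234' is a placeholder key ID, and this assumes git and gpg are both installed:)

```powershell
# Tell git which GPG key to sign with ('ABCD1234' is a placeholder key ID).
git config --global user.signingkey ABCD1234

# Sign a single commit...
git commit -S -m 'signed commit'

# ...or sign every commit by default:
git config --global commit.gpgsign true

# Inspect the signature on the latest commit:
git log --show-signature -1
```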
|
# ? Aug 10, 2016 16:06 |
|
OSI bean dip posted:Just leave it on but don't waste money on another product.
|
# ? Aug 10, 2016 17:12 |
|
I could contribute to the always-exciting AV-or-no-AV conversation, but instead I'll derail with Google's newest brilliant idea: giving websites direct Bluetooth access. Two main problems here: 1) WHY 2) See #1. Given how hilariously bad commodity device security is, I can't wait for a dual-exploit worm that infects WordPress using whatever 3-year-old bug 90% of the install base hasn't upgraded past, then uses Bluetooth to scan for known-vulnerable IoT devices and install a polite request to send bitcoin on them.
|
# ? Aug 10, 2016 22:05 |
|
Yes, I'm sure this compromised website popping up a Bluetooth device chooser in response to a random click will totally exploit all those internet-of-things devices.
|
# ? Aug 11, 2016 04:07 |
|
Jabor posted:Yes, I'm sure this compromised website popping up a Bluetooth device chooser in response to a random click will totally exploit all those internet-of-things devices. A more worrying scenario is when someone does choose to bind their bluetooth audio device - a car stereo. Want to bet your life that there's no exploit on the control channel that will let them through to the CAN bus? They've already demonstrated glaring failures in things like ID3 tags on MP3-CD players to get you an exploitable overflow, and jumped from that to controlling the internal systems (locks, AC, etc). That's not a far-fetched hypothetical, every part of that is something that's been demonstrated by researchers or blackhats at this point.
|
# ? Aug 11, 2016 05:03 |
|
Harik posted:A more worrying scenario is when someone does choose to bind their bluetooth audio device - a car stereo. Want to bet your life that there's no exploit on the control channel that will let them through to the CAN bus? They've already demonstrated glaring failures in things like ID3 tags on MP3-CD players to get you an exploitable overflow, and jumped from that to controlling the internal systems (locks, AC, etc). So it's really just "people might pair their vulnerable devices with a malicious app"? Aka "no worse than what people can already do"?
|
# ? Aug 11, 2016 07:39 |
|
Jabor posted:So it's really just "people might pair their vulnerable devices with a malicious app"? Except you don't even need to download an app or pair it, Spotify gets a poisoned ad which autopairs and fucks up whatever it can find with no user effort required.
|
# ? Aug 11, 2016 12:00 |
|
Looks like they've addressed that by requiring user input to pair - also, the device you are connecting to needs to accept the pairing, so I can't see how it's less secure than doing Bluetooth outside the browser - it's up to you to decide if that is already a garbage fire or not. I'm not saying that there isn't room for filling the implementation with horrific bugs, but it looks like there's at least a recognition that people might have security concerns over websites accessing physical devices. I think the use case for this is things like universal remotes where people can update the programming and the device templates without having to use native applications and gently caress around with the million different ways that different Bluetooth drivers present the pairing options.
|
# ? Aug 11, 2016 13:31 |
|
And then everyone sets it to remember the pairing settings, because people won't want to do the pairing dance every single time. IoT will be a garbage fire, and I for one can't wait. It's poo poo like this that makes sure I'll be employed for a long time.
|
# ? Aug 11, 2016 16:24 |
|
You'd hope that each website requesting to pair with the device would be identified separately and any interaction would require confirmation on the device itself but lol I think we know which way that sort of discussion would go.
|
# ? Aug 11, 2016 16:33 |
|
Meh, I'd trust the big browsers more than some lovely app created by god-knows-who to program my remote control, and the spec draft basically leads with security, which is a good sign. IoT is probably going to lead to lots of scary/hilarious screwups, but I don't see this as particularly bad (and maybe it actually keeps more things off of the real internet (probably not, the manufacturers want their data)).
|
# ? Aug 11, 2016 16:50 |
|
https://wicg.github.io/webusb/
|
# ? Aug 11, 2016 17:20 |
|
Hahahahahahaha. That's awesome.
|
# ? Aug 11, 2016 21:25 |
|
Just make ChromeKernel already
|
# ? Aug 11, 2016 22:00 |
|
wyoak posted:Meh I'd trust the big browsers more than some lovely app created by god-knows-who to program my remote control, and the spec draft basically leads with security which is a good sign. IoT is probably going to lead to lots of scary/hilarious screwups but I don't see this as particularly bad (and maybe actually keeps more things off of the real internet (probably not the manufacturers want their data)). That's not the problem. I trust the browser the way I trust my kernel - it's the website at the other end I don't trust to never make a mistake. The reason this is wanted is that people hate installing an app for everything. With web-BT you can control your color-changing LEDs in a webpage, no app required. Yay! That's great, until they have an XSS vulnerability and now this website (whose only purpose is to control a specific brand of LED lighting, remember) lets anyone on the internet have direct access to some guaranteed-vulnerable IoT hardware. There are already exploits that install wifi sniffers on "smart" lightbulbs, because of course there are. It's the same for any smart device and its associated "no-app required" website. Fitbit? Check. Bluetooth speakers with fancy controls/display that go beyond the standard audio HID profile? Check. Replace your streaming app with a website that's given bluetooth access? Of course, how else would the skip-track button get back to the server? Every one of those websites screams "hack me and gain access to every one of my customers and their vulnerable devices, because that's the only reason they'd be connecting."
|
# ? Aug 14, 2016 18:42 |
|
Fitbit doesn't have a no-app-required sync option though?
|
# ? Aug 14, 2016 20:42 |
|
Have you actually looked at the api, or are you just reacting based on how you've assumed things are going to work?
|
# ? Aug 15, 2016 08:48 |
|
ultramiraculous posted:Fitbit doesn't have a no-app-required sync option though?

Web BT isn't a thing, yet.

Jabor posted:Have you actually looked at the api, or are you just reacting based on how you've assumed things are going to work?

quote:User Gesture Required

So again, target a website where the user is expecting to interact with bluetooth, from the manufacturer of the device. As if XSS isn't a thing, really.

quote:Write to a Bluetooth Characteristic

It's not read-only. Are you willing to bet that proven-inept IoT manufacturers are suddenly going to manage to implement BLE properly? Because they're really not going to. It's not a complex thought. This is being made because people resist installing apps. So they buy the hardware and bookmark the manufacturer's Web-BT portal to use it. An XSS vulnerability combines with a guaranteed-to-exist bug in property parsing on the hardware and you've now exploited every single user who tries to change the color of their lighting. Of course the user is going to accept "connect to this device", that's exactly why they went to the website in the first place. The "best" part is that the devices generally won't be/can't be updated, so every time a bug is found in the website the ancient well-known hardware bug is accessible again. Replace XSS with direct root access to the website for smaller manufacturers hosted on a VPS with no clue about security, or using ancient frameworks because it still works and why update the page for the old product when the new one is already being sold? Harik fucked around with this message at 18:09 on Aug 15, 2016 |
# ? Aug 15, 2016 18:05 |
|
Harik posted:Web BT isn't a thing, yet. Just saying, statements like this make it look like you have no idea what XSS is beyond "something bad". What extra attack surface are you seeing that isn't present with a device-specific application that the user is expected to download?
|
# ? Aug 16, 2016 10:12 |
|
It's much easier to get hostile content into a browser than into an app, and typically that hostile content operates in a more flexible environment (scripting, wide access to system APIs). Faked email, Twitter "viruses", compromised ad networks, takeover of non-https sites on public wifi, site hacking. Even if an app exposes a URL scheme, it tends to be quite narrow. I think WebBT is fine and necessary, but it's definitely a different security landscape from BT-privileged apps.
|
# ? Aug 16, 2016 14:44 |
|
Subjunctive posted:It's much easier to get hostile content into a browser than into an app, and typically that hostile content operates in a more flexible environment (scripting, wide access to system APIs). Faked email, Twitter "viruses", compromised ad networks, takeover of non-https sites on public wifi, site hacking. Even if an app exposes a URL scheme, it tends to be quite narrow. Right, it's definitely a different security context with different implications. Many of those potential issues are affirmatively addressed in the specification, and others can be reasonably weighed against the security benefits of having this stuff happen in a sandboxed browser context instead of being required to grant a significant amount of trust to yet another opaque application. I'm mostly objecting to knee-jerking all the way to "this is a horrible and stupid idea" based on fears that don't seem to be entirely grounded in fact.
|
# ? Aug 16, 2016 15:32 |
|
Is the stuff here about key exchange protocols, ciphers, etc (ignoring the things like setting up Tor etc) up to date/good practice?
|
# ? Aug 17, 2016 09:01 |
|
Subjunctive posted:It's much easier to get hostile content into a browser than into an app, and typically that hostile content operates in a more flexible environment (scripting, wide access to system APIs). Faked email, Twitter "viruses", compromised ad networks, takeover of non-https sites on public wifi, site hacking. Even if an app exposes a URL scheme, it tends to be quite narrow. Apps are sort of the reverse. They are much more vulnerable to trusting the client too much/giving way more information than the client app actually needs or is providing. If I am understanding you correctly, you are considering the web service endpoints that the app talks to as the URL scheme?
|
# ? Aug 18, 2016 16:00 |
|
EVIL Gibson posted:Apps are sort of the reverse. They are much more vulnerable to trusting the client too much/giving way more information than the client app actually needs or is providing. No, I'm talking about the way you can launch an fb:// URL on iOS or Android and trigger some action in the Facebook app.
|
# ? Aug 18, 2016 17:20 |
|
Harik posted:...
|
# ? Aug 18, 2016 17:49 |
|
Not lightbulbs, but how about this from defcon? http://thenextweb.com/gadgets/2016/08/08/thermostats-can-now-get-infected-with-ransomware-because-2016/
|
# ? Aug 18, 2016 18:40 |
|
Interesting new password rules from NIST: https://pages.nist.gov/800-63-3/ Summary: https://nakedsecurity.sophos.com/2016/08/18/nists-new-password-rules-what-you-need-to-know/ quote:Size matters. At least it does when it comes to passwords. NIST’s new guidelines say you need a minimum of 8 characters
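(The headline rules from that NIST draft, a length minimum plus checking candidates against known-bad password lists, translate into a check along these lines. A sketch; the banned-list path is a placeholder for whatever breached-password list you maintain:)

```powershell
# NIST-style check: minimum 8 characters, no forced composition rules,
# reject anything on a known-bad/breached password list.
function Test-PasswordPolicy {
    param(
        [string]$Password,
        [string]$BannedListPath = 'C:\Lists\banned-passwords.txt'
    )

    if ($Password.Length -lt 8) { return $false }

    $banned = Get-Content -Path $BannedListPath
    if ($banned -contains $Password) { return $false }

    return $true
}
```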
|
# ? Aug 18, 2016 18:42 |
|
CLAM DOWN posted:Interesting new password rules from NIST: https://pages.nist.gov/800-63-3/ These are good rules.
|
# ? Aug 18, 2016 18:49 |
|
CLAM DOWN posted:Interesting new password rules from NIST: https://pages.nist.gov/800-63-3/ These are excellent rules.
|
# ? Aug 18, 2016 19:13 |
|
|
|
flosofl posted:These are good rules. ChubbyThePhat posted:These are excellent rules.
|
# ? Aug 19, 2016 04:24 |