|
Is there any hope of me understanding this moonspeak if I "think" in Python already? This is mostly just indecipherable garbage, and it didn't help that by default PowerShell won't run PowerShell scripts. I'm trying to run a command on a bunch of files in a directory. This is the command, as output by the mkvmerge GUI: quote:"C:\Program Files (x86)\MKVToolNix\mkvmerge.exe" -o "C:\\Users\\me\\Desktop\\bad folder\\file.mkv" "--default-track" "0:yes" "--forced-track" "0:no" "--display-dimensions" "0:1280x720" "--language" "2:eng" "--default-track" "2:yes" "--forced-track" "2:no" "-a" "2" "-d" "0" "-S" "-T" "--no-global-tags" "--no-chapters" "C:\\Users\\me\\Desktop\\good folder\\file.mkv" "--track-order" "0:0,0:2" It's big, and it's nasty, and for some reason it's got double quotes puked out all over it. What I'm doing is removing a language track from each mkv, then outputting it to a new directory with the same name. I could write a script to generate all 24 mkvmerge commands in Python in the time it's taken me to try to figure this out, but since I guess I'm a Windows admin I should at least try to figure this out the "right" way?
|
# ¿ Jan 28, 2012 10:08 |
|
|
I'm using cwrsync to pull some files from a Unix server, and I've decided to do this little project in Powershell, because why not. To get it to run I have to modify some environment variables, which I can do in a bat script like this: code:
I can't just run it with the full path because it keeps grabbing at binaries in PATH to figure out what to do (insert: I bet if I specified the full path for both rsync and ssh it would work, but I'd rather just figure this out). So, what am I missing here? E: Solved my own problem. This is what I was doing: code:
code:
FISHMANPET fucked around with this message at 23:08 on May 25, 2012 |
# ¿ May 25, 2012 00:08 |
|
Alright, powershell is pissing me off right now. I created a bunch of accounts with a python script that called dsadd, but I hosed it up and now I'm trying to fix the profile paths. I want to find all the profile paths that are on server2 instead of server1. So I say to myself, this sounds perfect for Powershell! Except Powershell can't read the profile paths out of most of my accounts; they just show up as null. dsget works just fine, and looking at them in AD Users & Computers I see the bad profile path, but get-aduser shows nothing. Never mind what a loving task it was to get powershell to even search. And never mind how impossible Microsoft has made it to actually get the AD cmdlets. Why in the hell would I have to run a special instance of Powershell (Active Directory Module for Windows PowerShell) to actually be able to query AD? Shouldn't running Powershell on the DC be enough?
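For anyone landing here later, the kind of query being described might be sketched like this. The server names, OU, and filter are assumptions for illustration, not from the original post:

```powershell
# Assumes the ActiveDirectory RSAT module is available.
Import-Module ActiveDirectory

# Find accounts whose roaming profile path points at the wrong server.
# 'server2' and the SearchBase are placeholders.
Get-ADUser -Filter * -SearchBase 'OU=People,DC=example,DC=com' -Properties ProfilePath |
    Where-Object { $_.ProfilePath -like '\\server2\*' } |
    Select-Object SamAccountName, ProfilePath
```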
|
# ¿ Aug 15, 2012 21:37 |
|
Nebulis01 posted:Curious, if you have time would you spin up a Server 2012 DC in test and see if the query succeeds in powershell v3? It's supposed to have improved/fixed a bunch of this poo poo I'll see if I can give it a try, as long as I don't need to put it into DNS, because AD doesn't control DNS, we have to go to the dark overlords to get SRV records put in.
|
# ¿ Aug 16, 2012 04:14 |
|
Misogynist posted:Import-Module ActiveDirectory That would have been just fine, except for a lot of the accounts the ProfilePath was listed as empty.
|
# ¿ Aug 16, 2012 16:17 |
|
From what I can tell, permissions in PowerShell are pretty awful. I had a script that modified permissions, and I just ended up calling xcacls in my script.
|
# ¿ May 10, 2013 17:38 |
|
You could use -PassThru, which will return the created object. My guess is that if it fails it will just fail and die out.
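As a hedged illustration of the -PassThru pattern (the cmdlet and account details here are hypothetical, not from the question being answered):

```powershell
# -PassThru makes cmdlets that normally return nothing emit the object
# they just created, so you can capture it and keep working with it.
$user = New-ADUser -Name 'Test User' -SamAccountName 'testuser' -PassThru

# $user now holds the created object; if creation failed, the cmdlet
# errors out and $user is never assigned.
$user.DistinguishedName
```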
|
# ¿ Apr 22, 2014 05:07 |
|
To go up one level you would go to .., so in your example it would be $LogPath = "..\Logs". To go up two levels, just keep doing the same: "..\..\Logs", etc.
|
# ¿ Jul 24, 2014 20:21 |
|
This language is such garbage. Never have I dealt with a language where I have to spend 90% of my time just fighting the language because it doesn't behave rationally. I'm trying to change the local admin password on a bunch of servers. For reasons I don't understand WMIC isn't working, so I'm trying to use invoke-command to run a command on the remote machine. I want to pass my script a list of servers to change the password on, and then I want it to return a result of what happens on each server, either success or failure. I keep wrapping things in try-catch blocks and invoke-command just returns garbage as far as error handling goes. I want invoke-command to return an error, but the only way it will return an error is if the computer you're running on doesn't exist. And it doesn't actually return an error that can be caught; I have to run it as a job and then do some fuckery to get an actual error out of it. If I want to get an error out of the script block that I'm running on the remote machine, that's a whole other can of worms. This doesn't seem that complicated, and it's one of those vaunted use cases that Powershell is supposedly great for: automate a task on a bunch of servers, and let me know how it goes (is it the wanting to know how it goes part that I'm not supposed to be doing with powershell?). I'm about ready to find something to treat as a physical manifestation of powershell, so I can throw it across the room, because it's all garbage. I've been fighting all week with this and all Powershell does is get in my way. Maybe it's because I'm used to programming in other languages that actually make sense? Do I need a lobotomy to think in Powershell terms? Here's my code, for reference; maybe I'm going about this entirely the wrong way? code:
|
# ¿ Jan 29, 2015 01:14 |
|
I don't think our firewall is allowing the ADSI connection through, but I can't really be sure because the error I get is not especially useful:code:
|
# ¿ Jan 29, 2015 17:05 |
|
Briantist posted:What about something like this? So I put this aside for a few days, and came back to it on Monday. The reason I was using -AsJob was that I was trying to catch the case where invoke-command couldn't connect to the machine for whatever reason. But after letting it sit for a while and coming back, I decided to add "-ErrorAction Stop" to the Invoke-Command so now it stops and properly throws an error when it can't connect.
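The fix being described boils down to this pattern (the server list and script block are placeholders): -ErrorAction Stop promotes Invoke-Command's non-terminating connection error into a terminating one, which is the only kind a catch block will see.

```powershell
foreach ($computer in $computers) {
    try {
        # Without -ErrorAction Stop, a failed connection is a
        # non-terminating error and sails right past the catch block.
        Invoke-Command -ComputerName $computer -ScriptBlock { hostname } -ErrorAction Stop
    }
    catch {
        Write-Warning "Could not connect to ${computer}: $_"
    }
}
```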
|
# ¿ Feb 5, 2015 18:47 |
|
Mapped drives are per user, and when you run as admin you're running as another user, and that other user doesn't have the drives that your user does.
|
# ¿ Apr 5, 2015 22:09 |
|
Find the lines where it sets the subject of the email and add another variable to that. You can pretty easily get the hostname of a computer using built-in powershell variables. Since you're new and sound like you want to learn, I'm being kind of vague in the hopes that it will point you in the right direction of figuring it out yourself. Teach a man to fish, blah blah blah. Also I'm too lazy to download the script myself and look at it.
|
# ¿ Apr 17, 2015 02:22 |
|
So I'm writing a script to cleanup from an SCCM bug and I'm wondering how to approach the problem. The problem is that SCCM has created multiple folders that have identical contents and I need to find them. The folders have random names, but they're named with a GUID. So I'd probably need a regex to say "is this a GUID of form X" or not. I also don't know how many levels deep the folders go. So it will look something like this code:
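A minimal sketch of the two pieces described above, a "is this a GUID" regex and a depth-unlimited directory walk (the source path is an assumption):

```powershell
# Match folder names shaped like 01234567-89ab-cdef-0123-456789abcdef.
$guidPattern = '^[0-9a-fA-F]{8}-([0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}$'

# -Recurse walks the tree to any depth; -Directory keeps only folders.
Get-ChildItem -Path 'C:\DriverSource' -Recurse -Directory |
    Where-Object { $_.Name -match $guidPattern }
```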
|
# ¿ Sep 15, 2015 18:15 |
|
The specific bug is that when it imported a driver folder, it made a copy of the folder for every INF file. The worst case was a 500MB Realtek driver that had 40 INF files, so it made 40 copies of that folder, resulting in a 20GB driver pack. So yeah, SCCM spewed too many files out, but I don't know what files specifically it spewed; I just know that if something is there twice then I need to mark it as bad (remediation will be done manually because it's kind of involved and scripting in SCCM sucks). But anyway, I've got another thing I have to work on today before I can dig into this, but hopefully I can get to it yet today and figure out what's going on.
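One possible way to find "identical contents" without knowing the file names in advance is to fingerprint each folder by its file hashes and group the fingerprints. This is just a sketch of that idea; the path is a placeholder and the approach assumes file content (not names) is what defines a duplicate:

```powershell
$folders = Get-ChildItem -Path 'C:\DriverSource' -Recurse -Directory

$fingerprints = foreach ($folder in $folders) {
    # Hash every file under the folder, sort so order doesn't matter.
    $hashes = Get-ChildItem -Path $folder.FullName -File -Recurse |
        Get-FileHash -Algorithm SHA256 |
        Sort-Object Hash

    [PSCustomObject]@{
        Path        = $folder.FullName
        # One combined fingerprint string per folder.
        Fingerprint = ($hashes.Hash -join '|')
    }
}

# Any fingerprint appearing more than once marks a set of duplicate folders.
$fingerprints | Group-Object Fingerprint | Where-Object Count -gt 1
```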
|
# ¿ Sep 16, 2015 17:26 |
|
tl;dr I'm an idiot, don't mind me. OK, I just have no idea what's going on, but every time I print anything, "Can't find file " gets put at the beginning. code:
E: AAAAUGH this has infected my entire environment! If I log out and log in, open a powershell window, and just write print "hello" in the window I get back "Can't find file hello" E2: Why am I using print? Did I fall on my head and think I was writing Python? I guess I'm an idiot and I keep calling the "print" command. FISHMANPET fucked around with this message at 22:07 on Sep 21, 2015 |
# ¿ Sep 21, 2015 21:41 |
|
That's kind of impressive, but it also shouldn't have worked as written, because I was only "printing" the word match.
|
# ¿ Sep 21, 2015 22:53 |
|
Welp, just printing out every file in your Temp dir, nbd.
|
# ¿ Sep 21, 2015 23:13 |
|
I had to solve a similar problem; mine was changing the password on 1000 servers running everything from Server 2003 up to 2012 R2. I was also interested in catching the various failure states. code:
Basically: I created $RemoteChangeScript, which is the script I would execute on the remote machine. Then I used a foreach loop to go through each computer and try to use invoke-command on that computer. If it failed and went to my catch block, I created a new object that had the computer name (twice, because I got the name 2 ways) and the name of the account I was trying to change, as well as the specific error message, and then returned that. In the RemoteChangeScript, if my command to change the password succeeded, I returned a similar object with an "error" message of "success". And if the script failed on the remote computer, I had a catch block that would return an object with that error. So no matter what, my script would return an object with the computer name, the account I was changing, the computer name again, and what the outcome of the command was. This let me separate the successes from the computers I couldn't connect to from the computers that for whatever reason couldn't execute the command locally.
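The original code: block didn't survive the archive; here is a hedged reconstruction of the pattern described, with variable names and the WinNT/[ADSI] password call as assumptions rather than the poster's actual code:

```powershell
$RemoteChangeScript = {
    param($AccountName, $NewPassword)
    try {
        # One common way to set a local account password remotely (assumed here).
        $account = [ADSI]"WinNT://$env:COMPUTERNAME/$AccountName,user"
        $account.SetPassword($NewPassword)
        [PSCustomObject]@{ Computer = $env:COMPUTERNAME; Account = $AccountName; Result = 'Success' }
    }
    catch {
        # Remote-side failure: same object shape, error message as the result.
        [PSCustomObject]@{ Computer = $env:COMPUTERNAME; Account = $AccountName; Result = $_.Exception.Message }
    }
}

foreach ($computer in $Computers) {
    try {
        Invoke-Command -ComputerName $computer -ScriptBlock $RemoteChangeScript `
            -ArgumentList $AccountName, $NewPassword -ErrorAction Stop
    }
    catch {
        # Connection failures land here; emit the same shape of object,
        # so every server yields exactly one result no matter what happened.
        [PSCustomObject]@{ Computer = $computer; Account = $AccountName; Result = $_.Exception.Message }
    }
}
```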
|
# ¿ Sep 30, 2015 21:41 |
|
I'm confused why you can't just pipe this into import-csv. Is there a reason you need a singular array rather than a list of objects that can be treated like an array? I suppose that would get you each field as a string, and you'd have to manually split it up into an array and maybe even make a new object but that still seems easier than what you're trying to do.
|
# ¿ Oct 6, 2015 19:55 |
|
I've been writing all my output to CSV in my scripts because it's been for human consumption. But I discovered that if one of the properties of your exported objects is an array, export-csv doesn't like that (it just prints System.Object[] instead of the values in the array). So now I've discovered outputting XML and JSON. This changes everything!
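A small sketch of the difference (the object here is made up for illustration):

```powershell
# An object with an array-valued property flattens badly in CSV.
$report = [PSCustomObject]@{
    Name    = 'server01'
    Members = @('alice', 'bob', 'carol')
}

# Export-Csv would render Members as "System.Object[]";
# JSON keeps the array intact:
$report | ConvertTo-Json

# Export-Clixml round-trips the whole object for later re-import:
# $report | Export-Clixml report.xml
# Import-Clixml report.xml
```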
|
# ¿ Oct 13, 2015 02:57 |
|
I've reached the limit of where the SCCM Cmdlets can take me, so I'm having to branch out into WMI queries for some of my stuff. And I have to say I'm just sort of lost. If I'm lucky enough to find someone online doing the same thing as I am I can just copy their query. But once I try and modify them I get lost, and I'm not really sure where to look for more help. I can find the MSDN documentation on the WMI classes, but that's pretty useless. Maybe I need a language primer/tutorial? Not really sure. I know some basic SQL so it's not like a WQL query is completely foreign to me.code:
Ok I figured out the join and why I don't need it, and rewrote the query like this: code:
To add more confusion to this, when I try and find articles about the WHERE clause, "in" isn't one of the listed operators, and I in fact found a Stackoverflow question that is my exact question, and the answer was "WQL doesn't have an IN operator" and yet there it is and it works. So I can't even really do research on what the found code is doing, because apparently what it's doing is impossible! http://stackoverflow.com/questions/19530825/in-operator-in-wql FISHMANPET fucked around with this message at 23:53 on Oct 16, 2015 |
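For anyone else starting from SQL, a basic WQL query from PowerShell looks like this (Win32_Service is just a stock example class, not from the SCCM query above):

```powershell
# WQL reads a lot like SQL: SELECT <properties> FROM <class> WHERE <condition>.
Get-WmiObject -Query "SELECT Name, State FROM Win32_Service WHERE State = 'Running'"

# Roughly equivalent, filtering on the client side instead of in WQL:
Get-WmiObject -Class Win32_Service | Where-Object { $_.State -eq 'Running' }
```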
# ¿ Oct 16, 2015 23:19 |
|
I'm back with another optimization question! (I gave up on that last one and just let the loop run 1000 queries because it was basically a one off thing) So now I'm working on a script to check some stuff in SCCM and notify people if they're doing dumb stuff, so this will be running at regular intervals. Part of it is that I have an object array of 1600 Collection objects. If you know anything about SCCM, we have a complex setup of limiting collections, and I'm trying to find the "root" limiting collection for each collection. So I have this poor man's recursion: code:
It'll run at night so ultimately I don't care how long it takes but it's a bear to test and if I can optimize it that'd be good either way, and also learning new things is good too.
|
# ¿ Oct 26, 2015 18:20 |
|
So I know that this line specifically is what's taking so long, because it's where I'm searching through 1600 items:code:
With this code: code:
|
# ¿ Oct 26, 2015 21:22 |
|
And Victory! I limited the properties in $collections to only those I wanted. So basically: code:
So the moral of the story is, the bigger your object, the longer where-object spends looking through it!
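Another way to attack the same problem, offered as a hedged alternative (property names are assumptions): build a hashtable index once, then replace every Where-Object scan of the 1600 items with a constant-time lookup.

```powershell
# Build the index once: one pass over the collection array.
$byId = @{}
foreach ($collection in $collections) {
    $byId[$collection.CollectionID] = $collection
}

# Each lookup is now a hash probe instead of a scan of 1600 objects:
$parent = $byId[$limitingId]
```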
|
# ¿ Oct 28, 2015 05:16 |
|
You'll probably want a while loop: code:
If it's more complicated to figure out whether it's done or not, you can use a do while (or do until) loop. A do loop will always execute at least once, so you can put in code to check whether it's done, store the result in a boolean variable, and then use that variable in the while or until. While and until are just antonyms, so while ($variable) is the same as until (-not $variable).
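A tiny sketch of that do/until shape (the "work" here is a stand-in):

```powershell
# A do/until loop always runs its body at least once,
# then re-checks the condition after each pass.
$done = $false
do {
    # ...do the work, then decide whether we're finished...
    $status = Get-Random -Maximum 10
    $done = ($status -eq 0)
} until ($done)

# The equivalent with while, since they're antonyms:
# do { ... } while (-not $done)
```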
|
# ¿ Nov 24, 2015 02:03 |
|
I'm trying to construct the parameters for an exe and execute it from within Powershell. For reference, I'm using CreateMedia.exe in SCCM: https://msdn.microsoft.com/en-us/library/jj155402.aspx I'm having trouble getting quotation marks to appear in the right places. I'm using the call (&) operator. To start, I just manually created the command line and parameters I needed. code:
code:
So then I use these variables to build the call command. code:
Invalid parameter: Manager So, am I constructing this incorrectly? Is there another way I should be doing this? E: Found this article that includes a tool that will just print the arguments as passed, and it looks like the quotes are preserved but there are some extra ones, so now I'm not entirely sure what's going on. E2: And I was wrong about not being able to use variables in command parameters; I misinterpreted what I read. So I have very little idea what I'm doing here, apparently. FISHMANPET fucked around with this message at 18:48 on Jan 7, 2016 |
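One approach that sidesteps hand-built quoting entirely is to pass the arguments as an array and splat it onto the call operator; PowerShell then quotes elements containing spaces itself. The exe path and switches below are hypothetical, just to show the mechanics:

```powershell
# Hypothetical exe and flags, purely for illustration.
$exe = 'C:\tools\showargs.exe'

# Each array element becomes exactly one argument to the exe,
# even when it contains spaces.
$argList = @(
    '/BootImagePath:C:\path with spaces\boot.wim'
    '/TargetPath:C:\temp\out.iso'
)

& $exe @argList   # splat the array onto the command line
```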
# ¿ Jan 7, 2016 18:12 |
|
So that appears to pass the string the right way to showargs.exe, but somehow fails miserably with createmedia.exe. createmedia.exe logs the full parameter list it's given (sort of; it strips the quotes in the logs) and the parameters it understands, and it apparently wasn't able to read anything from that. E: Screw it, I took my hand-constructed string, dropped in the variables I actually want to change, and it works just fine. code:
FISHMANPET fucked around with this message at 20:12 on Jan 7, 2016 |
# ¿ Jan 7, 2016 19:51 |
|
I think the author of the cmdlet has to write in support for -whatif, so that command just does it poorly. Are you having actual troubles executing the command, and/or is there a reason you're not using Disable-NetAdapterVmq?
|
# ¿ Feb 4, 2016 22:26 |
|
I would just like to chime in and say that I absolutely detest the ? and % in powershell (and similar constructs in other languages) because they help you write unreadable code. Sure, it's nice on the command line, but if you're writing a script that anybody has to look at ever (including yourself), for the love of God please use the full command name so that you can actually read the code. The pipe is forgivable because it's a core feature of the language, but $_ bothers me as well (especially when you start nesting statements; it's hard to figure out what $_ will actually mean at any given point).
|
# ¿ Jun 7, 2016 17:31 |
|
You need to tell it what to export. I assume you want $AllADUsers exported? out-host -paging $AllADUsers or export-csv -path $path -NoTypeInformation $AllADUsers. Oh yeah, you're not even setting your output to anything. Assuming that select line actually finds something, you'd need to pipe it into out-host or export-csv.
|
# ¿ Jun 13, 2016 19:13 |
|
I've got a style question. I've got a script, and it does some things, but the person running it may not care about those things. So I'm using write-verbose. And I've got [CmdletBinding()] at the top of my script, so it can accept the -verbose flag. But when I do that, every command I'm running outputs verbose output as well, overwhelming my output! I could attach -Verbose:$false to every command I call to silence it, but that sounds bad. I could just say screw it and write-host (problematic) or write-output my output. What's the "proper" powershell way to do this?
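For reference, the two pieces being discussed look like this in a script; the copy operation and paths are placeholders:

```powershell
# script.ps1 -- minimal sketch
[CmdletBinding()]   # enables the common parameters, including -Verbose
param(
    [string]$Source      = 'C:\temp\in.txt',
    [string]$Destination = 'C:\temp\out.txt'
)

Write-Verbose 'About to copy.'   # shown only when the script is run with -Verbose

# Explicitly switch verbose back off for a noisy inner command,
# even when the script itself was started with -Verbose:
Copy-Item -Path $Source -Destination $Destination -Verbose:$false
```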
|
# ¿ Jun 14, 2016 23:09 |
|
OK, I'm tearing my hair out over here and I just have no idea how I'm supposed to move forward. We have a powershell script. We run it to build new machines. We have an Enterprise Github subscription, so the script is "stored" there. Our team (7 of us) shares a single RDS server that we use as a tools server; it's where all our "stuff" is. I want to keep our script in a place on that server such that it will always be the latest version. Github is the "source of truth" so to speak, and there should always be a copy of that source of truth on our tools server. I've come up with a number of possibilities on how to solve this problem, and they all seem bad, so I'm worried that something we're doing is wrong and should be done differently. The dumbest brute force method is to have a scheduled task that runs every <unit of time> to execute a script that does a git pull. That's totally non-elegant, and there's the (minor) issue of what happens when the script is updated in Github but the scheduled task hasn't been run yet. I could set up a github post-commit webhook. I could then either send that webhook to an azure automation service that would fire off a git pull on the tools server, or I could custom-write something in powershell that will listen for that webhook and do a git pull when it receives the notification. Both of those seem convoluted and require a lot of custom programming. I could go all out and set up a CI process. I've gone slightly down the rabbit hole of looking at Visual Studio Team Services, but this is dramatically overkill, and is in no way aimed at a sys admin, but at a programmer. I could make it work, but there don't appear to be any built-in "build" actions that say "copy this git repo when it changes", so I'm left writing my own script to do the git pull and having VSTS execute that when the repo changes. None of this stuff is particularly complicated (maybe writing a powershell service to listen for a web request is...), but it's still special snowflake code that I have to write myself rather than just copy from someone smarter than me on the internet. Which is kind of the root of my problem: this doesn't seem like an uncommon scenario, yet I can't find any information about how to easily solve it, so I'm left wondering if I'm solving the wrong problem, in which case what should I be doing differently?
|
# ¿ Jan 20, 2017 00:05 |
|
Does anyone know of any good resources for "best practices" around error handling? I understand the mechanics of the try-catch block, but at work we've just got so much code that catches the error and throws "error: error" or some nonsense like that, and I'm sure other people have solved this problem beyond "throw it all in a try-catch block and shrug your shoulders".
|
# ¿ Aug 7, 2018 23:38 |
|
This is all just in scripts we run, and I'm absolutely tired of seeing poo poo like this:code:
|
# ¿ Aug 8, 2018 00:37 |
|
Is the problem all 68 workbooks opening at the same time, or having all 68 workbooks open at the same time? If it's just that launching them all at once causes your system to freak out, I'd add start-sleep -seconds 5 after every invoke-item. This will cause your script to wait 5 seconds between calls of invoke-item. Also, powershell is way better at iterating than the way you're doing it. This doesn't really change how your script runs, but it's much better powershell: code:
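The code: block above didn't survive the archive; a minimal sketch of the iteration being described (folder path and filter are assumptions) might look like:

```powershell
# Open each workbook, pausing between launches so Excel isn't slammed.
$workbooks = Get-ChildItem -Path 'C:\Reports' -Filter '*.xlsx'

foreach ($workbook in $workbooks) {
    Invoke-Item $workbook.FullName
    Start-Sleep -Seconds 5   # wait 5 seconds before launching the next one
}
```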
FISHMANPET fucked around with this message at 19:18 on Aug 24, 2018 |
# ¿ Aug 24, 2018 19:15 |
|
Do people have thoughts on foreach vs ForEach-Object? Is one or the other more readable? Better performing? More powershelly? I prefer foreach because it reads better to me, but that could be the last vestiges of Python being my native language. My coworker uses ForEach-Object, and it looks kinda dirty and messy, but piping does seem more in line with doing things the Powershell way.
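For anyone unfamiliar with the two, here's the same trivial loop both ways:

```powershell
$numbers = 1..5

# foreach statement: reads like other languages; the whole
# collection must already be in memory.
foreach ($n in $numbers) {
    $n * 2
}

# ForEach-Object: pipeline style; items are processed one at a
# time as they stream through the pipe.
$numbers | ForEach-Object { $_ * 2 }
```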
|
# ¿ Oct 9, 2018 23:44 |
|
fam just gently caress me upcode:
|
# ¿ Oct 10, 2018 21:45 |
|
See, I have a CS degree, so I've done tons of programming in languages (C, C++, Python) with that for loop formulation, so the first few times I wrote one in Powershell I really had to stop and wrap my head around it.
|
# ¿ Oct 10, 2018 23:19 |
|
|
That seems way harder to read and understand compared to what you originally wrote.
|
# ¿ Nov 12, 2018 19:21 |