|
Erwin posted: If you want to see what you can do with a certain object, use Get-Member. In your case: code:
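The quoted post's code block didn't survive, but the Get-Member pattern it describes looks like this (the piped object here is just an illustration):

```powershell
# Pipe any object to Get-Member to list its properties and methods
Get-Date | Get-Member

# Narrow the output to just the properties
Get-Date | Get-Member -MemberType Property
```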
|
# ? Jan 4, 2013 21:52 |
|
|
# ? May 30, 2024 13:02 |
|
Why does this work? code:
code:
code:
|
# ? Jan 8, 2013 16:04 |
|
Mierdaan posted:Why does this work The filter property on Get-ADUser is an LDAP filter, and there isn't actually an LDAP attribute named 'passwordexpired'. Using LDAP alone, you have to look up the max password age for the domain and then work it out from the 'pwdLastSet' attribute.
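A hedged sketch of that calculation, assuming the ActiveDirectory module is available (the exact policy lookup may differ if fine-grained password policies are in play):

```powershell
Import-Module ActiveDirectory

# Domain-wide maximum password age
$maxAge = (Get-ADDefaultDomainPasswordPolicy).MaxPasswordAge

# pwdLastSet is a file-time integer; convert it and compare against the max age
Get-ADUser -Filter * -Properties pwdLastSet |
    Where-Object {
        $_.pwdLastSet -gt 0 -and
        ([DateTime]::FromFileTime($_.pwdLastSet) + $maxAge) -lt (Get-Date)
    }
```

If the module is available anyway, `Search-ADAccount -PasswordExpired -UsersOnly` does the same bookkeeping for you.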
|
# ? Jan 9, 2013 05:42 |
|
Does anyone know the optimal (if any) way I could store date/time information in Excel's date-time code, so that when I write out CSVs for my log script utilities, I can just open them up and Excel knows what they are? If I didn't need the time I could import the data pretty easily, but I can't figure out a way to import the time part of it. Edit: Just so I'm clear, the format I'm referencing is Excel's, which looks like this: 41284.7083333333 is January 10, 2013 5:00pm. -Dethstryk- fucked around with this message at 00:23 on Jan 11, 2013 |
# ? Jan 11, 2013 00:19 |
|
-Dethstryk- posted:Does anyone know the optimal (if any) way I could store date/time information in Excel's date-time code, so that when I write out CSVs for my log script utilities, I can just open them up and Excel knows what they are? If I didn't need the time I could import the data pretty easily, but I can't figure out a way to import the time part of it. If you have it as a DateTime object you can use the ToOADate() method to convert it. code:
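For reference, the DateTime method is spelled ToOADate, and there's a matching static method going the other way; a quick sketch:

```powershell
# Convert a DateTime to Excel's OLE Automation date serial number
$now = Get-Date
$oa = $now.ToOADate()       # a double like 41284.7083333333

# Round-trip back to a DateTime
[DateTime]::FromOADate($oa)
```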
|
# ? Jan 11, 2013 02:30 |
|
I have a series of commands that I'd like to run sequentially by scheduling a script file to run on a server at a specific time. Is there anything I need to do to make them run this way (first one runs, second one runs after the first is finished, etc) aside from just putting them in a PS1 file sequentially? (it's PS1 because it's an Exchange 2007 server, I'm scheduling some mailbox moves after hours so I don't have to loving wake up at 2 AM just to start a Powershell command)
|
# ? Jan 17, 2013 17:29 |
|
No, that's pretty much how PowerShell scripts work. Just make sure you're accounting for your expected amount of baditems - it sucks to schedule a bunch of mailbox moves overnight and wake up to find they all failed due to hitting a single baditem (the default BadItemLimit in Exchange 2007's move-mailbox is 0). edit: I know I keep harping on upgrading, but seriously, mailbox moves are so much nicer in 2010/2013. Mierdaan fucked around with this message at 17:51 on Jan 17, 2013 |
# ? Jan 17, 2013 17:46 |
|
peak debt posted:import-csv creates rows of columns. If you do a foreach over them, you still get an array of columns for each loop, even if it is a one-member array, and get-aduser expects a string. That was the issue, thanks. Always the little things that catch you.
|
# ? Jan 17, 2013 17:53 |
|
Mierdaan posted:No, that's pretty much how PowerShell scripts work. Just make sure you're accounting for your expected amount of baditems - it sucks to schedule a bunch of mailbox moves overnight and wake up to find they all failed due to hitting a single baditem (the default BadItemLimit in Exchange 2007's move-mailbox is 0). Upgrading is unfortunately not in the budget. I brought it up again and was told that the inconveniences (such as offline mailbox moves) of 2007 only affect me, therefore I could basically eat a dick. They don't give a gently caress if I had to do poo poo in the middle of the night.
|
# ? Jan 17, 2013 17:54 |
|
Powdered Toast Man posted:I have a series of commands that I'd like to run sequentially by scheduling a script file to run on a server at a specific time. Is there anything I need to do to make them run this way (first one runs, second one runs after the first is finished, etc) aside from just putting them in a PS1 file sequentially? Powershell workflows (3.0) are an interesting concept which might match up to your needs. Decent scripting guy on the basics of them: https://blogs.technet.com/b/heyscriptingguy/archive/2012/12/26/powershell-workflows-the-basics.aspx?Redirected=true
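Statements in a PowerShell 3.0 workflow run sequentially by default, which matches the "first one runs, then the second" requirement; a minimal sketch (names are illustrative):

```powershell
workflow Invoke-NightlyJobs {
    # Each statement completes before the next one begins
    "step one"
    Start-Sleep -Seconds 1
    "step two"
}
```

Note that a plain .ps1 already runs its statements in order; workflows mainly add checkpointing and explicit `parallel { }` blocks on top of that.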
|
# ? Jan 17, 2013 18:04 |
|
If you just put c:\script.ps1 in the task scheduler, it will execute it with the default association for .ps1. So, your server will be very happy to oblige and just open notepad for you. Make sure you use powershell -file 'c:\script.ps1' to have it actually execute. Also, don't forget to put plenty of logging in the script, since you won't be monitoring it. See this for an example task: http://blogs.technet.com/b/heyscrip...ell-script.aspx
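A sketch of that setup, with transcript logging since nobody will be watching (paths and names are illustrative):

```powershell
# Task action: invoke the PowerShell host explicitly, not the .ps1 association
# powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\scripts\moves.ps1"

# Inside the script, capture all output to a timestamped log
Start-Transcript -Path "C:\scripts\logs\moves-$(Get-Date -Format yyyyMMdd-HHmmss).log"
try {
    # ... mailbox moves here ...
}
finally {
    Stop-Transcript
}
```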
|
# ? Jan 17, 2013 18:12 |
|
Hello my powershell expert friends, I have a question: I'm doing some deployment automation using TFS and Powershell. The process is as follows: TFS builds the app, drops it somewhere, and then uses PS remoting to execute a script on the deployment target using the TFS build service account. The account has local admin access on the target server. One of the steps is to install some prerequisites if they're not present. For example, .NET 4.0. There are some silent install options, but no matter what, it prompts for UAC elevation. I can't disable UAC on these machines. It works okay if I make the TFS build service account a domain admin, but that's obviously not a valid solution either. I've come up with nothing that works on Google so far. Any ideas? Note: I'm aware that this entire thing is stupid, and that a far more sane approach would be to build an image with all of the requirements preinstalled and then use that image in the event that a new environment is ever added. For incomprehensible reasons, that is not an acceptable solution. I've spent far more time trying to find a workaround than it would have taken to manually build out the client's 6 environments.
|
# ? Jan 17, 2013 19:18 |
|
Domain admin likely works because you have a GPO that disables UAC for your domain admins (guessing, but it makes sense). As far as I know, the only way around UAC prompts is to disable UAC, either temporarily or permanently; that's kind of the whole point of it. There are ways to temporarily disable it and re-enable it using various regkeys that won't prompt for elevation. They are essentially security holes, but it's possible.
|
# ? Jan 17, 2013 23:47 |
|
adaz posted:domain admin likely works because you have a GPO that disables UAC for your domain admins (guessing, what makes sense). Any references on the registry hack path? I tried the LocalAccountTokenFilterPolicy one, with no success.
|
# ? Jan 18, 2013 00:29 |
|
Ithaqua posted:Any references on the registry hack path? I tried the LocalAccountTokenFilterPolicy one, with no success. Let me check with our application packager folks tomorrow, I don't know which one they have used/use.
|
# ? Jan 18, 2013 00:36 |
|
You could use the task scheduler to run the install with highest privileges. You can use schtasks to create the task from the command line, then schtasks /run to execute it. For some odd reason I am thinking you might have to import the job from an XML file to get "highest privileges" enabled, but that isn't really a big deal.
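A hedged sketch of that approach from a cmd prompt (the task name and script path are illustrative; /rl highest is what requests the elevated token):

```powershell
# Run these from an elevated prompt on the target machine
schtasks /create /tn "InstallPrereqs" /rl highest /ru SYSTEM /sc once /st 00:00 `
    /tr "powershell -NoProfile -File C:\deploy\install-prereqs.ps1"
schtasks /run /tn "InstallPrereqs"
```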
|
# ? Jan 18, 2013 04:32 |
|
Checked with my buddy on the package team on this, apparently it's not an issue for our SCCM installs because we're doing what this article says to do: http://csi-windows.com/blog/all/27-csi-news-general/335-how-to-silence-the-uac-prompt-for-msi-packages-for-non-admins And then on certain rare occasions where that won't work one of our SCCM install accounts has UAC disabled via GPO. Probably not of much help, sorry man. adaz fucked around with this message at 21:06 on Jan 18, 2013 |
# ? Jan 18, 2013 21:04 |
|
Jelmylicious posted:If you just put c:\script.ps1 in the task scheduler, it will execute it with the default association for .ps1. So, your server will be very happy to oblige and just open notepad for you. Make sure you add powershell -file 'c:\script.ps1' to have it actually execute. Also, don't forget to put plenty of logging in the script, since you won't be monitoring it. I didn't figure this out until the next day, when the script didn't run as scheduled. Sigh.
|
# ? Jan 22, 2013 01:02 |
|
stubblyhead posted:If you have it as a DateTime object you can use the ToOADate() method to convert it. I'll be damned, that was way easier than I expected. Thank you so much. I've been learning more and more with PowerShell, and just the utilities/scripts I've been able to put into play already make me happy.
|
# ? Jan 22, 2013 18:28 |
|
I feel like such a noob right now: just dipping my toes in to Powershell and I'm already stuck. I'm trying to poll information about Java installs on a group of PC's, but I cannot get PS to contact the workstations with this script. When I run the code, I get "The RPC server is unavailable." code:
code:
I can't for the life of me figure out what I'm doing wrong. I've tried storing the list of computers in a text file and cat'ing it, and I get the same error. Am I formatting the list wrong? What am I missing?
|
# ? Jan 25, 2013 21:09 |
|
Sounder posted:snip Welp, fixed it, moments after posting a help request. But I'm confused on why I needed to do it the way I did it. I ended up needing to change how the names were stored: code:
code:
I'm confused. With my original script, wasn't I storing the names properly? With "Select-Object Name", wasn't the script selecting out the list of names as strings, that could be fed to Get-WmiObject? capitalcomma fucked around with this message at 21:46 on Jan 25, 2013 |
# ? Jan 25, 2013 21:26 |
|
Sounder posted:
This should be your hint here. What you were doing was returning a list of objects, not strings. Those objects have a property named "name", which is a string, as well as some methods (e.g. ToString, GetHashCode). You can check that by: code:
Mierdaan fucked around with this message at 21:44 on Jan 25, 2013 |
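The original code block is gone, but the distinction looks like this (using Get-Process as a stand-in for the AD query):

```powershell
# Select-Object returns objects that each *have* a Name property
$objs = Get-Process | Select-Object Name
$objs | Get-Member      # shows objects with a Name NoteProperty, not strings

# Two ways to get plain strings instead
$names  = Get-Process | Select-Object -ExpandProperty Name
$names2 = Get-Process | ForEach-Object { $_.Name }
```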
# ? Jan 25, 2013 21:42 |
|
How do I open an explorer window to a drive on a network path? That is, let's say I want a script to pop up an explorer window defaulting to \\COMPUTERNAME\c$. I found this from searching around: code:
I've tried: code:
|
# ? Jan 25, 2013 22:42 |
|
Mierdaan posted:When you explicitly use $_.name, you're pulling out the string property you actually care about, rather than an object. In the second line, you were trying to pass objects to Get-WMIObject's computername parameter, rather than the string values it expected. Aaaah, I had it rear end-backwards then. I thought my first method pulled strings and the other method pulled objects. Thank you for clarifying. I obviously have a lot more reading to do.
|
# ? Jan 25, 2013 22:48 |
|
AreWeDrunkYet posted:How do I open an explorer window to a drive on a network path? That is, let's say I want a script to pop up an explorer window defaulting to \\COMPUTERNAME\c$. code:
|
# ? Jan 25, 2013 23:03 |
|
ZeitGeits posted:
Weird, I tried it your way, and (just using the command prompt for simplicity's sake here) code:
code:
No idea why, but at least the workaround is simple. Thanks. e: On closer inspection, the command I was looking for was code:
|
# ? Jan 25, 2013 23:14 |
|
You can also use Invoke-Item to open a folder. code:
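Either of these opens an Explorer window on an admin share (the hostname is illustrative):

```powershell
Invoke-Item '\\COMPUTERNAME\c$'
# or equivalently
explorer.exe '\\COMPUTERNAME\c$'
```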
|
# ? Jan 26, 2013 03:45 |
|
I'm trying to import output from a program into PowerShell as an array. Right now it spits information out into a colon-delimited list like this: code:
code:
|
# ? Jan 28, 2013 22:41 |
|
text parsing is always a bitch but for those lines something like this would work probably. Hard to tell without seeing the whole long glob of stuff you have!code:
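The sample output and code block didn't survive, but a generic sketch for "Key : Value" lines would be (the file name is illustrative):

```powershell
$settings = @{}
Get-Content '.\controller-dump.txt' | ForEach-Object {
    if ($_ -match ':') {
        # Split on the first colon only, so values containing ':' stay intact
        $key, $value = $_ -split ':', 2
        $settings[$key.Trim()] = $value.Trim()
    }
}
```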
|
# ? Jan 29, 2013 09:36 |
|
e: Identity is not Identify. Oops.
vanity slug fucked around with this message at 10:41 on Jan 29, 2013 |
# ? Jan 29, 2013 09:48 |
|
adaz posted:first solution String.Split has a count parameter which you can use to limit the number of elements in the array. Also, if you use the -split operator (PS 2.0+), you can use regexes. code:
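Both forms in a quick sketch:

```powershell
# .NET overload: the second argument limits the number of pieces
'Size : 931.0 GB'.Split(':', 2)         # two elements, split at the first colon

# -split operator: the pattern is a regex, with an optional count
'Size : 931.0 GB' -split '\s*:\s*', 2   # also trims the surrounding whitespace
```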
|
# ? Jan 29, 2013 18:25 |
|
Jethro posted:String.Split has a count parameter which you can use to limit the number of elements in the array. Also, if you use the -split operator (PS 2.0+), you can use regexes. how did I never look at the overloads list for split. good lord, thanks!
|
# ? Jan 30, 2013 17:15 |
|
That is beautiful. Here is the full log file I'm trying to parse: http://pastebin.com/cSLYgyk9 It's basically a dump of a raid controller's settings that I get by calling the raid controller's CLI utility. So rather than the cumbersome way I was doing it, is there an easier way to just parse the log file for the virtual disks and make each one an array?
|
# ? Jan 30, 2013 17:30 |
|
This is probably simple, but I'm still new. I added the ability to use an XML configuration for a script I'm working on to ease deployment. The problem is that when I run the script using powershell.exe -file, either through Task Scheduler or Run, it sets the working directory to C:\ instead of the folder the script is in. If I run the script from the ISE or from the right-click context menu, it uses the script's directory. Basically, when I run the script and pull the config file even with .\, it defaults to looking for it in the C: root. I was able to get around this after some Googling by adding a WindowsPowerShell directory to the user profile documents folder, with a profile.ps1 file that included a Set-Location command. That makes everything work, but is there something I am missing here? The obvious solution to me would be to put the script directory in the config file, but now I'm just curious about why this is happening. Edit: Oh yeah. I can't put the script directory in the config file when I can't read the config file in the first place. I'm dumb. -Dethstryk- fucked around with this message at 04:05 on Feb 2, 2013 |
# ? Feb 2, 2013 01:46 |
|
-Dethstryk- posted:This is probably simple, but I'm still new. Look into using the variable $MyInvocation and the property $MyInvocation.MyCommand.Path (type literally that). Long stackoverflow answer: http://stackoverflow.com/a/6985381/965648
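Applied to the config-loading case, a sketch (the config file name is illustrative):

```powershell
# Directory containing the running script, regardless of the working directory
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path

# Load the config relative to the script, not relative to $PWD
[xml]$config = Get-Content -Path (Join-Path $scriptDir 'config.xml')
```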
|
# ? Feb 3, 2013 12:10 |
|
I'm playing with the PS 3 Schedule Task functions and trying to get a task to start initially on startup, and then every 15 mins indefinitely after that. However, the New-ScheduledTaskTrigger is piss poor and this seems impossible. Current code: code:
Please don't make me use schtasks. EDIT: gently caress this, I just made schtasks import the XML file. Fruit Smoothies fucked around with this message at 21:27 on Feb 3, 2013 |
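For what it's worth, the usual workaround on the cmdlet side is a -Once trigger with a repetition interval, combined with an -AtStartup trigger; a hedged sketch (task details are illustrative, and a single trigger still can't express "at startup, then repeat"):

```powershell
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File C:\scripts\job.ps1'
$startup = New-ScheduledTaskTrigger -AtStartup
$repeat  = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 15) `
    -RepetitionDuration ([TimeSpan]::MaxValue)
Register-ScheduledTask -TaskName 'EveryFifteen' -Action $action -Trigger $startup, $repeat
```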
# ? Feb 3, 2013 20:21 |
|
Titan Coeus posted:Look into using the variable $MyInvocation and the property $MyInvocation.MyCommand.Path (type literally that). Long stackoverflow answer: http://stackoverflow.com/a/6985381/965648 Thank you so much. That solved the problem exactly.
|
# ? Feb 4, 2013 03:22 |
|
Apologies as this is more of a sysadmin question than a programming question, but...how do you guys implement/deploy/distribute your scripts and functions? I'm writing up a script that will create user accounts. It basically just automates stuff like creating the user account (along with all of the group memberships, etc.), setting up profile folders, mailboxes, stuff like that. It's coming along, and I'd like to make it accessible to the rest of the IT department. Do you recommend deploying it via Group Policy? Maybe put it in a default path on their machines? A DFS share? I don't have much experience distributing code like this. capitalcomma fucked around with this message at 23:22 on Feb 5, 2013 |
# ? Feb 5, 2013 23:11 |
|
How would you guys handle this kind of situation? I want to randomly arrange a collection. I can do it like this: code:
|
# ? Feb 5, 2013 23:21 |
|
|
|
stubblyhead posted:How would you guys handle this kind of situation? I want to randomly arrange a collection. I believe this will work; as long as whatever you are checking implements the collection interface for arrays (which 99.99999999% should), it'll be OK. code:
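A sketch of that approach; asking Get-Random for as many items as the collection holds effectively shuffles it:

```powershell
$collection = 1..10

# Get-Random draws without replacement, so this is a full shuffle
$shuffled = $collection | Get-Random -Count $collection.Count

# Alternative: sort on a random key
$shuffled2 = $collection | Sort-Object { Get-Random }
```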
|
# ? Feb 6, 2013 08:03 |