ConfusedUs
Feb 24, 2004

Bees?
You want fucking bees?
Here you go!
ROLL INITIATIVE!!





A popular request the PowerShell team has received is to use Secure Shell protocol and Shell session (aka SSH) to interoperate between Windows and Linux – both Linux connecting to and managing Windows via SSH and, vice versa, Windows connecting to and managing Linux via SSH. Thus, the combination of PowerShell and SSH will deliver a robust and secure solution to automate and to remotely manage Linux and Windows systems.

http://blogs.msdn.com/b/looking_for...-shell-ssh.aspx

That's pretty sweet, actually.


Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Hughmoris posted:

You're the man! That solved my problem. I'm trying to fumble my way through automating some tasks at work and it's amazing how easy PowerShell makes it.

A quick google search showed me how simple it is to incorporate regex into the Where-Object command:
code:
$data = Import-Csv c:\pwr_shell\testCSV.csv | Where-Object {$_.'Facility ID' -match "^B"}
That will show me results for only the patients at Facility B, which I can then pipe into a new CSV.

The curly braces in Where-Object delimit a scriptblock, so you can actually put any expression in there that returns, or can be coerced to, a boolean (true/false) value. You can even make it multiline.

Anyway, what I'm getting at is that what you learned there about using regex is not limited to Where-Object. -match is an operator, like -eq (equals), and you can use it anywhere a comparison works, such as in an if statement:

code:
if ($name -match '^[RB]ob') {
    # matched; do something with $name here
}
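A successful -match also populates the automatic $Matches variable with any capture groups, so you can pull the matched text back out. A quick sketch (the name and group here are made up):

```powershell
# -match fills the automatic $Matches hashtable on a successful match
$name = 'Bobby'
if ($name -match '^(?<prefix>[RB]ob)') {
    Write-Host "Matched prefix: $($Matches['prefix'])"   # 'Bobby' -> 'Bob'
}
```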

high six posted:

So I just finished Powershell in a Month of Lunches a few weeks ago and I think I have the basics of PS down pretty well. Where would someone move next book-wise? For background, I am a helpdesk monkey with some occasional admin stuff that I do.
If you've finished the book, I think you should just start using PowerShell for yourself. As you go along you'll have questions and problems and then you can figure those out or post them.

ConfusedUs posted:

A popular request the PowerShell team has received is to use Secure Shell protocol and Shell session (aka SSH) to interoperate between Windows and Linux – both Linux connecting to and managing Windows via SSH and, vice versa, Windows connecting to and managing Linux via SSH. Thus, the combination of PowerShell and SSH will deliver a robust and secure solution to automate and to remotely manage Linux and Windows systems.

http://blogs.msdn.com/b/looking_for...-shell-ssh.aspx

That's pretty sweet, actually.
Read about this today. I'm so excited for this. It comes on the heels of them announcing PowerShell DSC for Linux (which incidentally runs on OMI, which is like WMI but open source and runs on other platforms). I love this cross-platform support from the company as a whole.

The Electronaut
May 10, 2009

Hughmoris posted:

You're the man! That solved my problem. I'm trying to fumble my way through automating some tasks at work and it's amazing how easy PowerShell makes it.

A quick google search showed me how simple it is to incorporate regex into the Where-Object command:
code:
$data = Import-Csv c:\pwr_shell\testCSV.csv | Where-Object {$_.'Facility ID' -match "^B"}
That will show me results for only the patients at Facility B, which I can then pipe into a new CSV.

Just a heads up: remember that the Format-* cmdlets are for formatting output, and should only be used at the end of a pipeline.
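To see why: the Format-* cmdlets emit formatting objects rather than the original data, so anything piped after them sees the wrong type. A quick sketch:

```powershell
# Fine: formatting is the last step in the pipeline
Get-Process | Sort-Object WS -Descending | Select-Object -First 5 | Format-Table Name, WS

# Not fine: Format-Table emits formatting objects, not processes, so the
# CSV below would contain formatting metadata rather than process data
# Get-Process | Format-Table Name, WS | Export-Csv C:\temp\procs.csv
```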

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Briantist posted:


If you've finished the book, I think you should just start using PowerShell for yourself. As you go along you'll have questions and problems and then you can figure those out or post them.

Agreed. I didn't start getting better with PowerShell until I started using it for tasks at work.

Oh hey, I need to get the amount of RAM installed on this list of 30 servers; how can I use PowerShell to get that information?

Over the last year I've built up a decent collection of starter scripts and one-liners that I can modify for whatever I need. I mostly manage AD and users, so most of my scripts are for those types of tasks.

Zaepho
Oct 31, 2013

skipdogg posted:

Agreed. I didn't start getting better with PowerShell until I started using it for tasks at work.

Oh hey, I need to get the amount of RAM installed on this list of 30 servers; how can I use PowerShell to get that information?

Over the last year I've built up a decent collection of starter scripts and one-liners that I can modify for whatever I need. I mostly manage AD and users, so most of my scripts are for those types of tasks.

The Win32_OperatingSystem class in the Root\CIMv2 namespace has a TotalVisibleMemorySize property that looks like a good source of data for your need.

Simply loop through each server, something like the pseudocode below, and then do something clever/useful with the output of the WMI objects.

code:
[array] $myServers = Get-AllTheServers

foreach ($Server in $myServers) {
    Get-WmiObject -ComputerName $Server -Namespace 'Root\CIMv2' -Class Win32_OperatingSystem
}
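Worth noting: TotalVisibleMemorySize is reported in kilobytes, so for an actual RAM report you'd convert it. Something like this (the server names are placeholders):

```powershell
$myServers = 'server01', 'server02'   # placeholder names

foreach ($server in $myServers) {
    Get-WmiObject -ComputerName $server -Namespace 'Root\CIMv2' -Class Win32_OperatingSystem |
        Select-Object @{n = 'Server'; e = { $server }},
                      # KB / 1MB (1048576) gives GB
                      @{n = 'RamGB';  e = { [math]::Round($_.TotalVisibleMemorySize / 1MB, 1) }}
}
```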

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Zaepho posted:

The Win32_OperatingSystem class in the Root\CIMv2 namespace has a TotalVisibleMemorySize property that looks like a good source of data for your need.
I think that was a rhetorical question, like he was showing an example of a problem he had that he then solved with PowerShell. But I could definitely see someone saying "hey yeah, how would I do that?" so kudos!

Zaepho
Oct 31, 2013

Briantist posted:

I think that was a rhetorical question, like he was showing an example of a problem he had that he then solved with PowerShell. But I could definitely see someone saying "hey yeah, how would I do that?" so kudos!

Not responding while instructing customers how to build a public cloud (aka public butt) is probably a better plan. In any case, it was a welcome diversion from watching cluster validations run.

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Zaepho posted:

Not responding while instructing customers how to build a public cloud (aka public butt) is probably a better plan. In any case, it was a welcome diversion from watching cluster validations run.
Gotta love cloud2butt
http://cloud-2-butt.tumblr.com/

Hadlock
Nov 9, 2004

Over the last two years we've gone from scripts that check if a service is running and restart it if it's stopped, to installing .NET code directly to the GAC using Enterprise Services remotely across the network for some custom code products. The great part about PowerShell is that if you keep it all in the same general area where everyone can look at it, you just build and build on top of each other's work. PSM1s allow for a pretty impressive level of function documentation, which you can pull up directly from the ISE and which is fully searchable; that's really handy if you know someone wrote a function to do a complex task but can't remember what it's named. It also helps prevent you from writing potentially buggy functions, or troubleshooting something that someone else has already figured out. Microsoft lets you make proper PS modules, but we've found using functions directly from a psm1 file is a lot better.

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Hadlock posted:

Over the last two years we've gone from scripts that check if a service is running and restart it if it's stopped, to installing .NET code directly to the GAC using Enterprise Services remotely across the network for some custom code products. The great part about PowerShell is that if you keep it all in the same general area where everyone can look at it, you just build and build on top of each other's work. PSM1s allow for a pretty impressive level of function documentation, which you can pull up directly from the ISE and which is fully searchable; that's really handy if you know someone wrote a function to do a complex task but can't remember what it's named. It also helps prevent you from writing potentially buggy functions, or troubleshooting something that someone else has already figured out. Microsoft lets you make proper PS modules, but we've found using functions directly from a psm1 file is a lot better.
This is definitely true. I've started really breaking out a lot of functions into modules that we can import into and use in other scripts.

Also there is a unit testing framework for PowerShell called Pester, if you're into that kind of thing.

I've also been trying to get my co-workers to start putting their scripts in Git. I make sure all of my scripts going forward are in a repo and I push them to our GitLab; it's a great habit to build. If you don't have a local Git server, Bitbucket offers free private repos for a team of up to 5 people. They also give unlimited everything to educational institutions (you add a .edu email address to your account and it happens automatically).

One question though, what is the distinction you're making between a "ps module" vs. "executing functions directly from a psm1 file"?

A psm1 is a script module, which you would import into another script. The ISE doesn't even let you directly execute a psm1. Just curious about the terminology and your workflow.
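For anyone following along, a minimal script-module round trip looks something like this (the function name and paths are invented):

```powershell
# functions.psm1 -- a script module is just a .psm1 file full of functions
function Get-Greeting {
    param([string] $Name)
    "Hello, $Name"
}
# Controls which functions the module exposes to importers
Export-ModuleMember -Function Get-Greeting

# Then, in a regular .ps1 script elsewhere:
# Import-Module \\server\share\functions.psm1
# Get-Greeting -Name 'Hadlock'
```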

12 rats tied together
Sep 7, 2006

IIRC, I maintained all of the scripts for my last Windows-centric job inside the module which had the manifest (.psd1), which contained pointers to a bunch of script files (.psm1). It also had an Update-Manifest function which, well, updated the manifest. I think you had to Copy-Item .\yourscript.ps1 -Destination /remote/path/to/module and then just run Update-Manifest -Module "module name".

If you're struggling to get your coworkers to use something like git, I've found it's usually a lot easier to get people to use something a little simpler like CVS or SVN. Getting people to use version control in general is kind of a hassle, but having to explain working dir vs index vs head to people who don't historically use version control makes it even worse.

But, with just powershell stuff, I had absolutely no problems with a remote directory with the modules+manifests, subdirectories for the content of each module, and a script that updates the manifests dynamically. Instead of running version control we had automatic VM snapshots for the hosted machine. Definitely not best practice, but it seemed a little unnecessary to configure version control for a bunch of sysadmin scripts. :)

The only gotchas were that I needed to modify the new-user script (for admins only, anyway) to add the remote module path to $env:PSModulePath and, naturally, all the scripts and tools you write need to be written with the understanding that they may be run against any target, from any machine, by any user (that has the appropriate permissions).

Hadlock
Nov 9, 2004

Briantist posted:


One question though, what is the distinction you're making between a "ps module" vs. "executing functions directly from a psm1 file"?

A psm1 is a script module, which you would import into another script. The ISE doesn't even let you directly execute a psm1. Just curious about the terminology and your workflow.

We execute 95%+ of our PowerShell via Tidal scheduling software (cmd.exe /c powershell path/to/script.ps1) or a BMC product I'm not particularly fond of.

One of the opening lines of each script says

code:
Import-Module \\path\to\functions.psm1
And functions.psm1 itself looks something like this (pseudocode, I apologize):

code:
function foo {
    param(
        [int] $blah,
        [string] $make
    )
    Write-Host "There are $blah number of $make items"
}

function saladtime {
    param(
        [int] $time,
        [string] $salad
    )
    Write-Host "it takes $time to make salad type $salad"
    if ($time -gt 20) {
        Write-Host "you do not have enough time to make salad"
        return 1
    }
    else {
        Write-Host "you have $time, go ahead and make a $salad salad"
        return 0
    }
}
And then you just call the functions from the PSM1 like

code:
# getsaladtime.ps1 -- gets salad time given number of minutes left
Import-Module \\path\to\functions.psm1
$st = saladtime $time $salad
if ($st -ne 0) {
    Write-Host "not enough time to make a salad!"
}
else {
    Write-Host "Go ahead and make a salad"
}
I'm sure I'm using the wrong terminology, but that's how we use it. We use the ISE to design, but then actually run the scripts from the command line via various methods. Tidal's primary purpose is launching stuff from the command line (it's like Windows Task Scheduler on steroids), and the BMC product has an agent that we use to deploy code, register it, etc. via PowerShell.

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
I've got a little script whose job is to locate and open an Excel file. The problem is that my computer is somewhat arsed, and opening Excel files directly (e.g. via a double click) often triggers an error to the effect of "An error occurred in sending the command to the application." This results in Excel opening, but failing to open the file. When I double-click the file again, it opens in the already-open Excel.

My PS script is hitting the same error:

code:
ii : An error occurred in sending the command to the application
At C:\Users\Newf\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1:54 char:61
+     ls C:\Users\Newf\Desktop\Timesheets "*$($ts[0])*" | ii
+                                                         ~~
+ CategoryInfo : NotSpecified: (:) [Invoke-Item], Win32Exception
+ FullyQualifiedErrorId : System.ComponentModel.Win32Exception,Microsoft.PowerShell.Commands.InvokeItemCommand
As you can see, the file is being passed to ii (invoke-item). Is there a way for me to catch the error and try the command again?

Hadlock
Nov 9, 2004

Try wrapping each major block of code in a try/catch statement:
code:
try {
    Write-Host "attempting to invoke item on xls file..."
    Invoke-Item \\path\to\blah.xls
    Write-Host "success!"
}
catch [Exception] {
    $_.Exception | Format-List -Force
    Write-Host "Invoke item failed."
    Exit 1
}
It's usually a good idea to put any mission-critical lines of code inside a try/catch so your code knows how to fail gracefully (replace the deleted file if the copy fails, email a human if the automated process can't find the file, etc.). Most of our psm1 functions have at least one try/catch, if not two or three. It's always fun when your script doesn't fatally exit and continues executing while missing some crucial component, wreaking unknown havoc (get computers in a list that start with "serv", format the D: drive on those computers... but due to a weird bug it can't find any servers with "serv", so it just formats the D: drive of every computer in the list).

http://blogs.technet.com/b/heyscriptingguy/archive/2010/03/11/hey-scripting-guy-march-11-2010.aspx
http://www.leaseweblabs.com/2014/01/print-full-exception-powershell-trycatch-block-using-format-list/
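And for the actual retry part of the question, put the try/catch in a loop. Rough sketch (the file path, retry count, and delay are made up; note that Invoke-Item's error is non-terminating by default, so you need -ErrorAction Stop for catch to see it):

```powershell
$attempts = 0
$opened = $false
while (-not $opened -and $attempts -lt 3) {
    $attempts++
    try {
        # -ErrorAction Stop promotes the error to terminating so catch fires
        Invoke-Item C:\Users\Newf\Desktop\Timesheets\example.xls -ErrorAction Stop   # placeholder path
        $opened = $true
    }
    catch {
        Write-Host "Attempt $attempts failed, retrying..."
        Start-Sleep -Seconds 2
    }
}
```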

Hadlock fucked around with this message at 01:06 on Jun 9, 2015

Hadlock
Nov 9, 2004

I think if you're interfacing with Excel you need to create a COM object for the Excel instance and then interact with the object. PowerShell has deep hooks into Office using COM objects; you can parse Outlook inboxes that way.

http://www.lazywinadmin.com/2014/03/powershell-read-excel-file-using-com.html

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
Hey, thanks. I'm not really trying to do anything with the file other than opening it, so the COM object probably isn't required. But the try/catch block looks like the right track - will give it a shot tomorrow.

12 rats tied together
Sep 7, 2006

Hadlock posted:

code:
Import-Module \\path\to\functions.psm1

I believe the benefit of creating and maintaining all the components of an actual 'module', instead of pointing the Import-Module cmdlet at a bunch of script files, is that you could do "Import-Module Hadlock" and gain access to all the content in all of your psm1s. Additionally (IIRC), provided that the module manifest is in your $env:PSModulePath, you don't even need to have the Import-Module in your scripts, because PowerShell will auto-search the available modules when you attempt to call a cmdlet it isn't familiar with. You could just call saladtime -param -param or whatever, sort of like how you can use Get-ADUser and similar without loading the Active Directory module every single time.
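Roughly, the auto-loading setup (PowerShell 3.0+) is just this — the UNC path and module name are invented:

```powershell
# Add a shared module directory to the search path for this session
$env:PSModulePath += ';\\server\share\Modules'   # placeholder UNC path

# With \\server\share\Modules\MyTools\MyTools.psd1 in place, calling one of
# its exported functions auto-imports the module -- no explicit Import-Module:
# MyToolsFunction -Param 'value'
```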

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Reiz posted:

I believe the benefit of creating and maintaining all the components of an actual 'module', instead of pointing the Import-Module cmdlet at a bunch of script files, is that you could do "Import-Module Hadlock" and gain access to all the content in all of your psm1s. Additionally (IIRC), provided that the module manifest is in your $env:PSModulePath, you don't even need to have the Import-Module in your scripts, because PowerShell will auto-search the available modules when you attempt to call a cmdlet it isn't familiar with. You could just call saladtime -param -param or whatever, sort of like how you can use Get-ADUser and similar without loading the Active Directory module every single time.
I think of a Module as a Library: a set of functions that are all related in some way or grouped for a specific reason. It makes sense to have many modules for different purposes.

There are some quirks to module auto-loading. If you're using PowerShell interactively (as a shell), then sure just try to use the cmdlet, but if you're writing a script you should explicitly import the module. It doesn't hurt, and you can also error out using -ErrorAction Stop which is helpful.
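For example, at the top of a script (ActiveDirectory here is just a stand-in for whatever module you depend on):

```powershell
# Fail fast if the module isn't available, instead of erroring
# halfway through the script when a cmdlet isn't found
Import-Module ActiveDirectory -ErrorAction Stop
```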

Hadlock
Nov 9, 2004

We maintain about 200 servers on a daily basis; managing which modules are loaded on which machine would be a nightmare. Doing Import-Module \\networkpath\module.psm1 at the top of every script (and then giving that path wide read rights) is a lot easier. This has worked really well for us over the last year as we've been expanding our scripting libraries, which measure about 3000 lines of code all told. You can just run the script on the remote machine using Invoke-Command and know it will always work and is always up to date. Our BMC and Tidal products allow us to pass in special environment variables, or just pass variables in as PowerShell script parameters, which is great.

We have two main psm1 function libraries, one for installing loose code our programmers dream up, and another for administrative functions like password changes, service manipulation, F5 load balancer etc.

For error handling we also have at the top of every script

code:
$ErrorActionPreference = 1  # 1 is the numeric value of [ActionPreference]::Stop
http://blogs.technet.com/b/heyscriptingguy/archive/2010/03/09/hey-scripting-guy-march-9-2010.aspx

Hadlock fucked around with this message at 00:42 on Jun 10, 2015

12 rats tied together
Sep 7, 2006

Briantist posted:

There are some quirks to module auto-loading. If you're using PowerShell interactively (as a shell), then sure just try to use the cmdlet, but if you're writing a script you should explicitly import the module. It doesn't hurt, and you can also error out using -ErrorAction Stop which is helpful.

Yeah, I read about this quirk a lot while we were coming up with the whole setup but, to be honest, I never actually ran into any problems with it. But you are right in that most of our tools were for interactive shell usage by an admin; we didn't have a ton of scripts running in the background all the time, and the ones we did have were generally hard-coded. I.e., instead of having a scheduled task for a local script containing "Import-Module validate-configfile.psm1; Validate-ConfigFile -path/to/file", we have a script file sitting on a server and the scheduled task is just powershell.exe /unc/path/to/script/file.

quote:

We maintain about 200 servers on a daily basis, managing which modules are loaded on which machine would be a nightmare; doing Import-Module \\networkpath\module.psm1 at the top of every script (and then giving that path wide read-rights) is a lot easier.
A module setup is functionally identical to "Import-Module \\networkpath\module.psm1", except instead of "\\networkpath\module.psm1" it is just "Import-Module Module-Name". You still need PowerShell's context to have full read access to \\path\to\module; you just add that path to $env:PSModulePath instead of putting it inside Import-Module. Depending on whether or not you have admins that understand what a "path variable" is, this could be either a good or a bad thing, but the concept is the same.

Instead of pulling a psm1 file individually with Import-Module, you are pulling a psd1 file, which is essentially a wrapper for multiple .psm1 files. You're still, in effect, "pulling" from the UNC path -- you aren't actually physically installing the module on each server. You get a bunch of added functionality with the manifest: the ability to list all of the cmdlets/functions in your module, the ability to set module-scoped variables like $ErrorActionPreference (so you don't need $ErrorActionPreference = 1 at the top of all of your scripts), and you can require specific versions of the .NET Framework, require that specific .dlls are registered before importing the module, or require (and, with some effort, dynamically adjust your scripts based on) the version of PowerShell installed on the machine running the script.

Granted, most of these features are better geared towards interactive tool writing than they are for background scripts, and you clearly have a setup that works for you which is awesome. However, you might want to take a look at the msdn page for module manifests anyway just so you are aware that they are a thing, because they might solve some problems for you in the future: link.
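The scaffolding itself is one cmdlet; a sketch with made-up names and versions:

```powershell
# Generates the .psd1 manifest; FunctionsToExport controls what's public
New-ModuleManifest -Path .\SaladTools\SaladTools.psd1 `
    -RootModule 'SaladTools.psm1' `
    -ModuleVersion '1.0.0' `
    -PowerShellVersion '3.0' `
    -FunctionsToExport 'foo', 'saladtime'
```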

Tony Montana
Aug 6, 2005

by FactsAreUseless
Ok, I posted earlier but didn't get around to posting more code. That's ok.. today is another day and I have another thing I'm trying to do with Powershell..

So parsing security logs, Windows event log. We start with an export of the log which looks like this:

code:
Keywords	Date and Time	Source	Event ID	Task Category
Audit Success	10/06/15 2:32:18 PM	Microsoft-Windows-Security-Auditing	4634	Logoff	"An account was logged off.

Subject:
	Security ID:		USERNAME
	Account Name:		USERNAME
	Account Domain:		DOMAIN
	Logon ID:		0x4655f9c0

Logon Type:			2

This event is generated when a logon session is destroyed. It may be positively correlated with a logon event using the Logon ID value. Logon IDs are only unique between reboots on the same computer."
Audit Success	10/06/15 2:22:53 PM	Microsoft-Windows-Security-Auditing	4672	Special Logon	"Special privileges assigned to new logon.

Subject:
	Security ID:		SYSTEM
	Account Name:		SYSTEM
	Account Domain:		NT AUTHORITY
	Logon ID:		0x3e7

Privileges:		SeAssignPrimaryTokenPrivilege
			SeTcbPrivilege
			SeSecurityPrivilege
			SeTakeOwnershipPrivilege
			SeLoadDriverPrivilege
			SeBackupPrivilege
			SeRestorePrivilege
			SeDebugPrivilege
			SeAuditPrivilege
			SeSystemEnvironmentPrivilege
			SeImpersonatePrivilege" 
We want to use Powershell's string handling and parsing methods to pull out the usernames of actual users that are in this log and present them along with the timestamp of the activity.

A line such as

code:
Select-String -Pattern "([A][u][d][i][t][ ][S][u][c][c][e])" -Path C:\scripts\rawlog.txt -Context 0,3
will pull out all the actual authentications plus the following 3 lines; you can see from our log example that would result in something like this:

code:
C:\scripts\rawlog.txt:2:Audit Success    10/06/15 2:32:18 PM    Microsoft-Windows-Security-Auditing    4634 
   Logoff    "An account was logged off.
  C:\scripts\rawlog.txt:3:
  C:\scripts\rawlog.txt:4:Subject:
  C:\scripts\rawlog.txt:5:    Security ID:        USERNAME
> C:\scripts\rawlog.txt:13:Audit Success    10/06/15 2:22:53 PM    Microsoft-Windows-Security-Auditing    
4672    Special Logon    "Special privileges assigned to new logon.
  C:\scripts\rawlog.txt:14:
  C:\scripts\rawlog.txt:15:Subject:
  C:\scripts\rawlog.txt:16:    Security ID:        SYSTEM
> C:\scripts\rawlog.txt:32:Audit Success    10/06/15 2:22:53 PM    Microsoft-Windows-Security-Auditing    
4624    Logon    "An account was successfully logged on.
  C:\scripts\rawlog.txt:33:
  C:\scripts\rawlog.txt:34:Subject:
  C:\scripts\rawlog.txt:35:    Security ID:        SYSTEM
> C:\scripts\rawlog.txt:80:Audit Success    10/06/15 2:16:10 PM    Microsoft-Windows-Security-Auditing    
4672    Special Logon    "Special privileges assigned to new logon.
  C:\scripts\rawlog.txt:81:
  C:\scripts\rawlog.txt:82:Subject:
  C:\scripts\rawlog.txt:83:    Security ID:        USERNAME
Ok great, but now what we really want to do is eliminate the noise. I've replaced USERNAME with real user names, but where it says SYSTEM is just Windows being awesome. Those SYSTEM events mean nothing and I want to exclude them from the log.

This is where things get sticky.

code:
Select-String -Pattern "[S][Y][S][T][E][M]" -Path C:\scripts\rawevents.txt -Context 3,0 
That perfectly does the inverse of what I'm trying to do. Point it at the raw log of events we've already removed the rest of the crap from and you'll end up with this:

code:
C:\scripts\rawevents.txt:1061:  C:\scripts\rawlog.txt:5766:    Security ID:        SYSTEM
  C:\scripts\rawevents.txt:1068:4672    Special Logon    "Special privileges assigned to new logon.
  C:\scripts\rawevents.txt:1069:  C:\scripts\rawlog.txt:5823:
  C:\scripts\rawevents.txt:1070:  C:\scripts\rawlog.txt:5824:Subject:
> C:\scripts\rawevents.txt:1071:  C:\scripts\rawlog.txt:5825:    Security ID:        SYSTEM
  C:\scripts\rawevents.txt:1073:4648    Logon    "A logon was attempted using explicit credentials.
  C:\scripts\rawevents.txt:1074:  C:\scripts\rawlog.txt:5839:
  C:\scripts\rawevents.txt:1075:  C:\scripts\rawlog.txt:5840:Subject:
> C:\scripts\rawevents.txt:1076:  C:\scripts\rawlog.txt:5841:    Security ID:        SYSTEM
  C:\scripts\rawevents.txt:1078:4634    Logoff    "An account was logged off.
  C:\scripts\rawevents.txt:1079:  C:\scripts\rawlog.txt:5866:
  C:\scripts\rawevents.txt:1080:  C:\scripts\rawlog.txt:5867:Subject:
It catches every instance of the word SYSTEM and the three lines before it. Great, if we were trying to find the SYSTEM events.. but we want to do the opposite! I want to delete these lines from the raw log, leaving me with just the events I want. I can do it a couple more times with meaningless noise and be left with the actual user logins.. the purpose of this exercise.

How would you go about doing this?
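One thing I did find: Select-String has a -NotMatch switch that keeps only the lines that don't match, but it doesn't remove the -Context lines captured around a match, so it only gets me partway there:

```powershell
# Keep only lines that don't mention SYSTEM; the Subject:/blank context
# lines around each SYSTEM hit still need a separate pass
Get-Content C:\scripts\rawevents.txt |
    Select-String -Pattern 'SYSTEM' -NotMatch |
    ForEach-Object { $_.Line } |
    Set-Content C:\scripts\filtered.txt
```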

Tony Montana fucked around with this message at 07:06 on Jun 11, 2015

Bonfire Lit
Jul 9, 2008

If you're one of the sinners who caused this please unfriend me now.

Is parsing that data from the exported log a hard requirement, or could you pull the audit logs on the fly (this is easy to do with powershell)?

Sefal
Nov 8, 2011
Fun Shoe
I'm trying to do 2 things, and both aren't working for me.

1st, I want to do something simple:
I want to enter a sAMAccountName and get extensionAttribute1.
code:
 Function Check-ADUser
{
Param ($Username)

    #write-host "Username = " $Username    
    #$Username = ($Username.Split("\")[1])
    $ADRoot = [ADSI]''
    $ADSearch = New-Object System.DirectoryServices.DirectorySearcher($ADRoot) 
    $SAMAccountName = "$Username"
    $ADSearch.Filter = "(&(objectClass=user)(sAMAccountName=$SAMAccountName))"
    $Result = $ADSearch.FindAll()

    If($Result.Count -eq 0)
    {
        #Write-Host "No such user on the Server" | Out-Null
        $Status = "0"
    }
    Else
    {
        #Write-Host "User exist on the Server" | Out-Null
        $Status = "1"
    }
    
    $Results = New-Object Psobject
    $Results | Add-Member Noteproperty Status $Status
    Write-Output $Results    
}
 get-QADUser "$username" -IncludedProperties extensionAttribute1 | Select-Object Name, extensionAttribute1
it doesn't use the variable; it just returns my own extensionAttribute1.

The 2nd problem I'm having is: I want to automate creating an AD user as much as possible.
I plan on using the job title as the field for copying the rights it needs from a coworker, and the department to place it in the right OU.

I think they're pretty easy to solve, yet I'm stumbling with this.

Any ideas?

Tony Montana
Aug 6, 2005

by FactsAreUseless

Bonfire Lit posted:

Is parsing that data from the exported log a hard requirement, or could you pull the audit logs on the fly (this is easy to do with powershell)?

Using Get-WinEvent or something? You can, sure. You don't have to use the export.

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Sefal posted:

I'm trying to do 2 things, and both aren't working for me.

1st, I want to do something simple:
I want to enter a sAMAccountName and get extensionAttribute1.
code:
 Function Check-ADUser
{
Param ($Username)

    #write-host "Username = " $Username    
    #$Username = ($Username.Split("\")[1])
    $ADRoot = [ADSI]''
    $ADSearch = New-Object System.DirectoryServices.DirectorySearcher($ADRoot) 
    $SAMAccountName = "$Username"
    $ADSearch.Filter = "(&(objectClass=user)(sAMAccountName=$SAMAccountName))"
    $Result = $ADSearch.FindAll()

    If($Result.Count -eq 0)
    {
        #Write-Host "No such user on the Server" | Out-Null
        $Status = "0"
    }
    Else
    {
        #Write-Host "User exist on the Server" | Out-Null
        $Status = "1"
    }
    
    $Results = New-Object Psobject
    $Results | Add-Member Noteproperty Status $Status
    Write-Output $Results    
}
 get-QADUser "$username" -IncludedProperties extensionAttribute1 | Select-Object Name, extensionAttribute1
it doesn't use the variable; it just returns my own extensionAttribute1.

The 2nd problem I'm having is: I want to automate creating an AD user as much as possible.
I plan on using the job title as the field for copying the rights it needs from a coworker, and the department to place it in the right OU.

I think they're pretty easy to solve, yet I'm stumbling with this.

Any ideas?

Why are you using the Quest AD cmdlets instead of Microsoft's?

I would do it with the standard AD cmdlets like this:

code:
Get-ADUser -Identity $Username -Properties extensionAttribute1 | Select-Object -ExpandProperty extensionAttribute1
For problem 2, be more specific. Do you have this started? Is there a specific point you're stuck on?

Bonfire Lit
Jul 9, 2008

If you're one of the sinners who caused this please unfriend me now.

Tony Montana posted:

Using Get-WinEvent or something? You can, sure. You don't have to use the export.

I'd use an XPath filter for Get-WinEvent:
code:
Get-WinEvent -LogName Security -FilterXPath '*[System[EventID=4624 or EventID=4634 or EventID=4647]][EventData[Data[@Name="TargetUserSid"]!="S-1-5-18"]]'
Filtering by event data allows you to get rid of all of the NT AUTHORITY\SYSTEM logons (which that SID represents). You may also want to filter other service-related logons, like NT AUTHORITY\NETWORK SERVICE. You can then process the data you have further; if you just want a list that shows you logons and logoffs, you could use something like this:
code:
Get-WinEvent -LogName Security -FilterXPath '*[System[EventID=4624 or EventID=4634 or EventID=4647]][EventData[Data[@Name="TargetUserSid"]!="S-1-5-18"]]' | % {
    $Event = $_
    switch ($Event.Id) {
        4624 { $user=$Event.Properties[4].Value }
        {4634,4647 -contains $_} { $user=$Event.Properties[0].Value }
    }
    New-Object -Type PSObject -Prop @{Time=$Event.TimeCreated; Action=$Event.TaskDisplayName; User=$user.Translate([System.Security.Principal.NTAccount]) }
}
Unfortunately the EventRecords don't expose the names of the EventData properties as far as I can tell, so you'll have to make do with hardcoded numbers.
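One workaround if you do want names: round-trip the record through its XML representation, which does carry the EventData names. A sketch (not tested against every event type):

```powershell
# EventLogRecord.ToXml() exposes the Data names that the Properties
# collection only offers as numeric indexes.
Get-WinEvent -LogName Security -MaxEvents 5 | ForEach-Object {
    $xml  = [xml]$_.ToXml()
    $data = @{}
    foreach ($d in $xml.Event.EventData.Data) { $data[$d.Name] = $d.'#text' }
    # e.g. $data['TargetUserName'] instead of $_.Properties[5].Value
    New-Object -Type PSObject -Prop @{ Id = $_.Id; Time = $_.TimeCreated; User = $data['TargetUserName'] }
}
```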

Hadlock
Nov 9, 2004

You can export the windows event logs as XML right? Powershell is great at parsing XML, json, etc. Don't lose all your hair trying to pull the data out of an unformatted text file.

Tony Montana
Aug 6, 2005

by FactsAreUseless

Bonfire Lit posted:

XPath filters

Ah yes, XPath filters. I was looking at those before, the syntax looked janky and it scared me off. I'm an AD guy, I know LDAP query syntax really well and the SQL equivalents.. so I thought 'how hard can XPath be?'. Yeah well, if you're going to put queries straight into the Event Viewer there are apparently hidden characters (like carriage returns) that need to be found and included and blah blah, gently caress that. BUT, it does look like this is really the way to do it with the native tools.. not only in Powershell but in the Event Viewer directly. Thanks for your examples, I'm going to give them a try.


Hadlock posted:

You can export the windows event logs as XML right? Powershell is great at parsing XML, json, etc. Don't lose all your hair trying to pull the data out of an unformatted text file.

Yeah sure, but you can see the parsing I'm trying to do.. how would it be different if the format was XML? You've still gotta do a find based on a string, capture lines around it and then (hopefully) delete them from the source. We can't go the other way because we don't know the names of the users in the log, so we can't specify them and leave the rest.

Tony Montana
Aug 6, 2005

by FactsAreUseless

Bonfire Lit posted:

I'd use an XPath filter for Get-WinEvent:
code:
Get-WinEvent -LogName Security -FilterXPath '*[System[EventID=4624 or EventID=4634 or EventID=4647]][EventData[Data[@Name="TargetUserSid"]!="S-1-5-18"]]'
Filtering by event data allows you to get rid of all of the NT AUTHORITY\SYSTEM logons (which that SID represents). You may also want to filter other service-related logons, like NT AUTHORITY\NETWORK SERVICE. You can then process the data you have further; if you just want a list that shows you logons and logoffs, you could use something like this:
code:
Get-WinEvent -LogName Security -FilterXPath '*[System[EventID=4624 or EventID=4634 or EventID=4647]][EventData[Data[@Name="TargetUserSid"]!="S-1-5-18"]]' | % {
    $Event = $_
    switch ($Event.Id) {
        4624 { $user=$Event.Properties[4].Value }
        {4634,4647 -contains $_} { $user=$Event.Properties[0].Value }
    }
    New-Object -Type PSObject -Prop @{Time=$Event.TimeCreated; Action=$Event.TaskDisplayName; User=$user.Translate([System.Security.Principal.NTAccount]) }
}
Unfortunately the EventRecords don't expose the names of the EventData properties as far as I can tell, so you'll have to make do with hardcoded numbers.

No, wait.. this is amazing. Thank you kindly! Wow.. been Powershelling pretty hard for a while now but this blew my mind. Thanks again!

edit: I called my manager over and told him 'this is going to blast your tits off' and ran the script on a production server. His tits took flight.

edit: How would you modify this to show you a single instance of each user? Assume we only want to see the most recent logon for each user.. how would you do it?
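One way that should get you the newest logon per user (an untested sketch; it assumes TargetUserName sits at Properties[5] for event 4624):

```powershell
# Get-WinEvent returns newest-first, but sort explicitly to be safe,
# then keep a single event per user.
Get-WinEvent -LogName Security -FilterXPath '*[System[EventID=4624]][EventData[Data[@Name="TargetUserSid"]!="S-1-5-18"]]' |
    Group-Object { $_.Properties[5].Value } |
    ForEach-Object { $_.Group | Sort-Object TimeCreated -Descending | Select-Object -First 1 }
```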

Tony Montana fucked around with this message at 06:51 on Jun 12, 2015

Sefal
Nov 8, 2011
Fun Shoe

Briantist posted:

Why are you using the Quest AD cmdlets instead of Microsoft's?

I would do it with the standard AD cmdlets like this:

code:
Get-ADUser -Identity $Username -Property extensionAttribute1 | Select-Object -ExpandProperty extensionAttribute1
For problem 2, be more specific. Do you have this started? Is there a specific point you're stuck on?

I'm using the Quest AD cmdlets because the get-aduser gives me an error "there are no active directory webservices running...."
code:
#
# Create user accounts in AD and Exchange
# This script will take input from the host and create user accounts based on that information.
#
# Requirements:
#  [+] Exchange Management Console installed.


Add-PSsnapin Microsoft.Exchange.Management.PowerShell.E2010

$cfgTab = [char]9
$cfgCompany = "Company";
$cfgMailDomain = "@company.nl"; #E-Mail Domain

#=============================================================================
# A series of hash tables for office locations which are our OUs.
#=============================================================================
$cfgHeadOU = @{
  "OU" = "OU=accounts,OU=afdeling 2011,DC=dc.nl,DC=company.nl";
  "DC" = "company.nl" };

#=============================================================================
# Creates an array of the above hash tables.
#=============================================================================
$cfgOffices = @{
  "HeadOU" = $cfgHeadOU;
  };

#=============================================================================
# Gets a a list of mailbox databases 
#=============================================================================
Function chooseMailboxDatabase()
{
	$MbDbase = Get-MailboxDatabase -identity "exchange mailbox.db*"
	$NumOfDB = $MbDbase.Count
	$Number = 0
	$Choice = 0

	If ($NumOfDB -eq $Null)
	{
		Write-Host $MbDbase.Identity
		return $MbDbase.Identity
	}
	else
	{
		foreach ($mbxDB in $MbDbase)
		{
			#Write-Host "$Number . " $MbxDB.Identity
			$Number ++
		}
		$choice = (Get-Date).DayOfYear % $Number
		#write-host "choice = " $choice	
		return $MbDbase[$Choice].Identity
	}
}

#=============================================================================
# Checks AD for the user account.
#=============================================================================
Function Check-ADUser
{
Param ($Username)

    #write-host "Username = " $Username    
    #$Username = ($Username.Split("\")[1])
    $ADRoot = [ADSI]''
    $ADSearch = New-Object System.DirectoryServices.DirectorySearcher($ADRoot) 
    $SAMAccountName = "$Username"
    $ADSearch.Filter = "(&(objectClass=user)(sAMAccountName=$SAMAccountName))"
    $Result = $ADSearch.FindAll()

    If($Result.Count -eq 0)
    {
        #Write-Host "No such user on the Server" | Out-Null
        $Status = "0"
    }
    Else
    {
        #Write-Host "User exist on the Server" | Out-Null
        $Status = "1"
    }
    
    $Results = New-Object Psobject
    $Results | Add-Member Noteproperty Status $Status
    Write-Output $Results    
}

#=============================================================================
# Main processing
#=============================================================================

	Write-Host -foregroundcolor Yellow "This script will create a new AD user with mailbox."
	Write-Host -Foreground Gray "---------------------------------------------------------------"

	$SurName = read-host "Last Name"
	$GivenName = read-host "First Name"

	if ($surname.length -lt 4) {$short=$surname.substring(0,[Math]::Min(2,$surname.length))} else {$short=$surname.substring(0,2)}  # Substring only takes (start,length)

	$samaccountname = $givenname.substring(0,1) + $Surname.substring(0,2) 

	#=============================================================================
	# Check if username already exists.  If it does generate a different one	
	#=============================================================================	
	$Status = (Check-ADUser -username $samAccountName).Status
 	If ($Status -eq 1) {
		write-host -foreground Green "User ID already in use.  Incremented Surname by 1"
		$samaccountname =  $givenname.substring(0,1) + $surname.substring(1,2) 
	} Else { }
	$Status = (Check-ADUser -username $samAccountName).Status
	If ($Status -eq 1) {
		write-host -foreground Green "User ID already in use.  Incremented User ID by 1 character after the previously used one"
		$samaccountname = $givenname.substring(0,1) + $surname.substring(2,2)  
	
 	} Else { }
	$Status = (Check-ADUser -username $samAccountName).Status
	If ($Status -eq 1) {
		write-host -foreground Green "User ID already in use.  Incremented User ID by 1 character after the previously used one"	
		$samaccountname = $givenname.substring(0,1) + $Surname.substring(3,2) 
	} Else { }
	$Status = (Check-ADUser -username $samAccountName).Status
 	If ($Status -eq 1) {
		Write-Host -Foreground Red "User account already exists. Goodbye."
		Exit
 	} Else { }	
	$DisplayName = $Givenname + " " + $Surname

	$userPrincipalName = $GivenName + $SurName 


	#Exchange Specific
	$strMailAddress = $userPrincipalName;
	$strMailAlias = $samAccountName;
	#=============================================================================
	# Accept OU from host and verify it is in the array (and not NULL).
	#=============================================================================

	Write-Host -Foreground Gray "---------------------------------------------------------------"
        Write-Host -Foreground Green " "$DisplayName
        Write-Host -Foreground Gray "---------------------------------------------------------------"
	Write-Host " Logon ID:"$cfgTab$samAccountName;
	Write-Host " Password:"$cfgTab$PlainPassword;
	Write-Host " Display Name:"$cfgTab$DisplayName;	
	
	Write-Host -Foreground Gray "---------------------------------------------------------------"
        	
	Write-Host -foregroundcolor Yellow "If account information is correct, press ENTER to continue else CTL-C to exit"
	$stuff = read-host " "
		
	# Choose a mailbox database for this account.
	$mbDatabase = chooseMailboxDatabase
		
	Write-Host -Foreground Gray "---------------------------------------------------------------"

	
	$Password = ConvertTo-SecureString $PlainPassword -AsPlainText -Force

	# Create Exchange mailbox.
	New-Mailbox -Name $DisplayName -Alias $strMailAlias -PrimarySmtpAddress $strMailAddress -OrganizationalUnit $strOU -UserPrincipalName $userPrincipalName -SamAccountName $samAccountName -FirstName $GivenName -Initials $MInitial -LastName $SurName -Password $Password -ResetPasswordOnNextLogon $false -RetentionPolicy "Standard Retention" -Database $mbDatabase -DomainController $cfgOffices.Get_Item( $strOffice ).Get_Item("DC")  | out-null

	Write-Host -foregroundcolor Green "AD Account and mailbox created"	

	#=============================================================================
	# Wait for account to replicate
	#=============================================================================
	write-host -foregroundcolor Red "Waiting for AD replication..."
	Do { $Status = (Check-ADUser -username $samAccountName).Status 
		#write-host "Status = " $Status
		Start-Sleep -s 5 }
	While ($Status -ne 1)	

	# Set attributes on AD DS account.
	Get-Mailbox $userPrincipalName | Set-User -Company $cfgCompany -Headou $HeadOU
	write-host -foregroundcolor Green "Additional Attributes set"

	
	write-host -foregroundcolor Green "Processing complete"
This gives me an error at line 160-74: not a valid SMTP address. That's not the problem; the mailbox stuff I'm certain I'll figure out. What I don't know is which commands to use to put the created account into the right OU based on department, and to copy the rights it needs via job title.

If someone could point me in the right direction.

This is the script I use to generate a SAM account name; now I want it to flow into creating the user and setting the correct rights.
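(For what it's worth, the four copy-pasted Check-ADUser blocks could collapse into one loop — an untested sketch of the same logic:)

```powershell
# Try successive two-letter slices of the surname until a free
# sAMAccountName turns up; bail out when every candidate is taken.
$samaccountname = $null
for ($i = 0; $i -le 3 -and ($i + 2) -le $surname.Length; $i++) {
    $candidate = $givenname.Substring(0,1) + $surname.Substring($i,2)
    if ((Check-ADUser -username $candidate).Status -ne 1) {
        $samaccountname = $candidate
        break
    }
}
if (-not $samaccountname) {
    Write-Host -Foreground Red "User account already exists. Goodbye."
    Exit
}
```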

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Sefal posted:

I'm using the Quest AD cmdlets because the get-aduser gives me an error "there are no active directory webservices running...."
Ah, this probably means you still have Windows 2003 or maybe 2008 domain controllers? If you have any DCs running Windows 2008 R2 you can force the cmdlets to contact one of those with the -Server parameter.

You can also install Active Directory Web Services on any of the downlevel DCs so that the cmdlets will work against any of them:
http://blogs.technet.com/b/ashleymc...ontrollers.aspx
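For example (the DC name here is made up; point it at whichever 2008 R2 DC you have):

```powershell
# Force the AD cmdlets to talk to a specific DC that runs ADWS.
Get-ADUser -Identity $Username -Server 'dc03.company.nl' -Properties extensionAttribute1 |
    Select-Object -ExpandProperty extensionAttribute1
```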

Venusy
Feb 21, 2007
Solved a problem in about 10 minutes yesterday. We've been doing an AD restructure at work, which has meant a lot of the old computer objects have been deleted, and we're starting over with laptop numbering, but the numbers aren't consecutive and the laptops are in different OUs. I had been running Get-ADComputer -Filter {Name -like "<CompanyName>-MOB-LT*"} | select Name | sort Name to grab the list and manually finding the first free number to assign out. I'd avoided automating that because I thought it'd be too complex, mainly because I was thinking the wrong way round: about getting the list of active computers and filtering it down, rather than walking all potential computers within a range of numbers (001 to 999) and breaking as soon as a lookup errors out.
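That "work from the full range" idea sketches out to something like this ('CONTOSO' standing in for the real prefix; untested):

```powershell
# Build every possible name in the range, then take the first one
# with no matching computer object (needs PS3+ for -notin).
$existing = Get-ADComputer -Filter 'Name -like "CONTOSO-MOB-LT*"' |
    Select-Object -ExpandProperty Name
$next = 1..999 |
    ForEach-Object { 'CONTOSO-MOB-LT{0:D3}' -f $_ } |
    Where-Object { $_ -notin $existing } |
    Select-Object -First 1
```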

Dravs
Mar 8, 2011

You've done well, kiddo.
Looking for a bit of help with a script here. I want to output a repadmin /replsummary into a HTML file and e-mail it to a group. I found a great script which outputs the command into a HTM file with colour coding for the errors, however I cannot get the HTM file to send as the body of an e-mail.

Currently the script looks like this:

code:
# Get the replication info.
$myRepInfo = @(repadmin /replsum * /bysrc /bydest /sort:delta)
 
# Initialize our array.
$cleanRepInfo = @() 
   # Start @ #10 because all the previous lines are junk formatting
   # and strip off the last 4 lines because they are not needed.
    for ($i=10; $i -lt ($myRepInfo.Count-4); $i++) {
            if($myRepInfo[$i] -ne ""){
            # Remove empty lines from our array.
            $myRepInfo[$i] -replace '\s+', " "            
            $cleanRepInfo += $myRepInfo[$i]             
            }
            }            
$finalRepInfo = @()   
            foreach ($line in $cleanRepInfo) {
            $splitRepInfo = $line -split '\s+',8
            if ($splitRepInfo[0] -eq "Source") { $repType = "Source" }
            if ($splitRepInfo[0] -eq "Destination") { $repType = "Destination" }
            
            if ($splitRepInfo[1] -notmatch "DSA") {       
            # Create an Object and populate it with our values.
           $objRepValues = New-Object System.Object 
               $objRepValues | Add-Member -type NoteProperty -name DSAType -value $repType # Source or Destination DSA
               $objRepValues | Add-Member -type NoteProperty -name Hostname  -value $splitRepInfo[1] # Hostname
               $objRepValues | Add-Member -type NoteProperty -name Delta  -value $splitRepInfo[2] # Largest Delta
               $objRepValues | Add-Member -type NoteProperty -name Fails -value $splitRepInfo[3] # Failures
               #$objRepValues | Add-Member -type NoteProperty -name Slash  -value $splitRepInfo[4] # Slash char
               $objRepValues | Add-Member -type NoteProperty -name Total -value $splitRepInfo[5] # Totals
               $objRepValues | Add-Member -type NoteProperty -name PctError  -value $splitRepInfo[6] # % errors   
               $objRepValues | Add-Member -type NoteProperty -name ErrorMsg  -value $splitRepInfo[7] # Error code
           
            # Add the Object as a row to our array    
            $finalRepInfo += $objRepValues
            
            }
            }
$html = $finalRepInfo|ConvertTo-Html -Fragment        
            
$xml = [xml]$html
$attr = $xml.CreateAttribute("id")
$attr.Value='diskTbl'
$xml.table.Attributes.Append($attr)

$rows=$xml.table.selectNodes('//tr')
for($i=1;$i -lt $rows.count; $i++){
    $value=$rows.Item($i).LastChild.'#text'
    if($value -ne $null){
       $attr=$xml.CreateAttribute('style')
       $attr.Value='background-color: red;'
       [void]$rows.Item($i).Attributes.Append($attr)
    }
    
    else {
       $value
       $attr=$xml.CreateAttribute('style')
       $attr.Value='background-color: green;'
       [void]$rows.Item($i).Attributes.Append($attr)
    }
}
#embed a CSS stylesheet in the html header 
$html=$xml.OuterXml|Out-String
$style='<style type=text/css>#diskTbl { background-color: white; }  
td, th { border:1px solid black; border-collapse:collapse; } 
th { color:white; background-color:black; } 
table, tr, td, th { padding: 2px; margin: 0px } table { margin-left:50px; }</style>'
ConvertTo-Html -head $style -body $html -Title "Replication Report"| Out-File C:\ReplicationReport.htm

# Prepare e-mail

$from = "noreply@blah.com"
$to = "me@blah.com"
$subject = "AD Replication summary"
$SMTPServer = "mailrelay.blah.com"
$body = Get-Content ("C:\ReplicationReport.htm")

Send-MailMessage -From $from -To $to -Subject $subject -SmtpServer $SMTPServer -Body $body -BodyAsHtml
However the error I get is this:

code:
Send-MailMessage : Cannot convert 'System.Object[]' to the type 'System.String' required by parameter 'Body'. Specified method is not supported.
At C:\test.ps1:77 char:17
+ Send-MailMessage <<<<  -From $from -To $to -Subject $subject -SmtpServer $SMTPServer -BodyAsHTML $body
    + CategoryInfo          : InvalidArgument: (:) [Send-MailMessage], ParameterBindingException
    + FullyQualifiedErrorId : CannotConvertArgument,Microsoft.PowerShell.Commands.SendMailMessage
I thought I had it cracked by putting in the $body = Get-Content and the -BodyAsHtml $true switches, however they don't seem to have helped.

Can anyone suggest a way to get this HTM file into the body of an e-mail? I just want to set it up as a scheduled task to e-mail a team here with AD replication status once a week.

Edit: Bolded the parts I am having trouble with.

Dravs fucked around with this message at 16:33 on Jun 16, 2015

12 rats tied together
Sep 7, 2006

It's been a while since I've messed around with Send-MailMessage but your problem looks to be that $body isn't what you think it is.

Try doing $body | GM. You might need to do something like $body.content, or change it to $body = get-content .\file (with no parentheses).

Dravs
Mar 8, 2011

You've done well, kiddo.
Thanks, that put me on the right track.

I have fixed it now. The line:

code:
$body = Get-Content ("C:\ReplicationReport.htm")
I changed to:

code:
$body = Get-Content "C:\ReplicationReport.htm" | Out-String
And it works in all its glory!

Anyone feel free to copy this, it is pretty useful if you are lacking any monitoring on your AD environment.

Dravs fucked around with this message at 10:20 on Jun 16, 2015

Dravs
Mar 8, 2011

You've done well, kiddo.
Additionally, putting this script into a scheduled task does not work because scheduled tasks are apparently stupid. So I had to make a batch file to launch PowerShell and run the script rather than letting the scheduled task try and do it directly (which returned a 0x1 error every time).

Batch file looks a bit like this:

code:
@echo off
Powershell.exe -executionpolicy remotesigned -File "C:\scriptname.ps1"

Dravs fucked around with this message at 10:20 on Jun 16, 2015

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Dravs posted:

Thanks, that put me on the right track.

I have fixed it now. The line:
code:
$body = Get-Content ("C:\ReplicationReport.htm")
I changed to:
code:
$body = Get-Content "C:\ReplicationReport.htm" | Out-String
And it works in all its glory!

Anyone feel free to copy this, it is pretty useful if you are lacking any monitoring on your AD environment.
Get-Content reads a file line by line. In PowerShell 3+, you can use Get-Content -Raw to read the whole file as a string as well.
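So on PS3+ the whole thing collapses to:

```powershell
# One string for the whole file; no Out-String needed.
$body = Get-Content -Path "C:\ReplicationReport.htm" -Raw
```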

Dravs posted:

Additionally, putting this script into a scheduled task does not work because scheduled tasks is apparently stupid. So I had to also make a batch file to launch powershell and run the script rather than letting scheduled tasks try and do it (which would return a 0x1 error everytime)

Batch file looks a bit like this:
code:
@echo off
Powershell.exe -executionpolicy remotesigned -File "C:\scriptname.ps1"
You could have just used -ExecutionPolicy Whatever in the scheduled task. I actually recommend several switches when scheduling PowerShell tasks.
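The action line I usually end up with looks something like this (the exact switch set is a matter of taste):

```powershell
# Typical scheduled-task action: no profile, no prompts, no window.
Powershell.exe -NoProfile -NonInteractive -WindowStyle Hidden -ExecutionPolicy Bypass -File "C:\scriptname.ps1"
```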

Also, please use code tags instead of quote tags for your code. If we try to quote your post, the quote tags disappear, but the code tags would be included (I had to copy paste to quote them).

Dravs
Mar 8, 2011

You've done well, kiddo.

Briantist posted:

Get-Content reads a file line by line. In PowerShell 3+, you can use Get-Content -Raw to read the whole file as a string as well.

You could have just used -ExecutionPolicy Whatever in the scheduled task. I actually recommend several switches when scheduling PowerShell tasks.

Also, please use code tags instead of quote tags for your code. If we try to quote your post, the quote tags disappear, but the code tags would be included (I had to copy paste to quote them).

Thanks for that link, I will look at the other switches for execution policy. I didn't even know code tags existed, I have gone back and changed them all.

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy
We're seeing a very strange problem with using winforms in PowerShell. Basically, it works fine in the console host. If we run it in ISE, it also runs fine, but after some random amount of time, ISE will completely lock up.

See our StackOverflow post for details (I've added a bounty to it if any of you have ideas and want to answer):
http://stackoverflow.com/q/30808084/3905079

stubblyhead
Sep 13, 2007

That is treason, Johnny!

Fun Shoe
I need to do some remote admin on an AWS instance using powershell, but the machine I need to do it from can only access the web through a proxy server. Assuming the AWS server is configured correctly am I correct in thinking that all I need to do is specify the proxy details with New-PSSessionOption and feed that into New-PSSession?
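Something along these lines is what I'm picturing (untested; host name invented, and New-PSSessionOption also takes -ProxyAuthentication / -ProxyCredential if the proxy wants creds):

```powershell
# Route the WinRM connection through the proxy the box already uses.
$opt = New-PSSessionOption -ProxyAccessType IEConfig
$session = New-PSSession -ComputerName 'ec2-host.example.com' -UseSSL -SessionOption $opt
```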
