|
Nevermind, moving question to the proper venue.
fed_dude fucked around with this message at 21:46 on Nov 7, 2012 |
# ? Nov 7, 2012 21:43 |
|
You can do that in SCCM 2007 too. What I do is download updates with https://www.wsusoffline.net, then inject them into the WIM with dism.
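Something like this, assuming the image is index 1 of install.wim, mounted to C:\mount, with the wsusoffline downloads under C:\wsusoffline (all paths illustrative; w61-x64\glb is where wsusoffline drops Win7/2008 R2 updates, so adjust for your OS):

```powershell
# Mount the image, add each downloaded update, then commit the changes
dism /Mount-Wim /WimFile:C:\images\install.wim /Index:1 /MountDir:C:\mount
Get-ChildItem C:\wsusoffline\client\w61-x64\glb\*.cab | ForEach-Object {
    dism /Image:C:\mount /Add-Package /PackagePath:$($_.FullName)
}
dism /Unmount-Wim /MountDir:C:\mount /Commit
```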
|
# ? Nov 8, 2012 15:19 |
|
You can do it, but it sucks. With 2012 there's a way to track what you have and haven't put in; when you manually inject the patches with dism, you have to keep track of it separately.
|
# ? Nov 8, 2012 15:52 |
|
Moey posted:If it is for one off things (non-automated), take a look at this script someone put together on the spiceworks forums. You put in a PC name and it will show everything in the add/remove programs window with options to uninstall or silent uninstall (if available). That's exactly what I needed, thanks. It's a small vanilla 2008 domain here and I just had to check that there were no TeamViewer-style apps on any of the computers remaining from the last admin. Unrelated: SQL Server Windows NT is taking up 1.5GB RAM on Server 2008. Is this just the Windows Internal Database? It seems higher than I've noticed elsewhere (though elsewhere is always 2008 R2).
|
# ? Nov 8, 2012 17:14 |
|
Question for the SCOM pros. I just started a new job and the previous SysAdmin installed SCOM 2012, but it's pretty much an out of the box setup. It seems like SCOM is a little overzealous and a CRITICAL alert is generated for everything, even the stupidest poo poo. I did find this http://blogs.technet.com/b/kevinholman/archive/2008/06/26/using-opsmgr-notifications-in-the-real-world-part-1.aspx which helped me set up a subscription for only High Priority Critical alerts, and that has definitely reduced the noise quite a bit. I'm wondering if following the article's suggestion of adding Overrides on priority for the stuff you want to actually be paged/emailed about is still the recommended way in SCOM 2012? Or is there a better guide/way to start customizing these alerts so I'm actually getting notified about what I want? Any other general advice, words of wisdom, or links to good reads on SCOM 2012 also appreciated, thanks.
|
# ? Nov 8, 2012 17:50 |
|
Anybody know how DFS redirects users to the correct location? Our domain has the same name as an external domain we have, and our network admin deleted the records pointing the name to the domain controllers. While \\domain sends us to the external domain, \\domain\namespace still works correctly. I thought deleting the records would break DFS, but it did not. We thought there were some other DNS shenanigans going on but did not find any records for the namespace. Edit: That was easy, I should have looked harder. If anybody else has this dumb and easy to answer question, go to your DFS share and open the properties on one of the folders shared out with DFS. There will be a DFS tab in there that shows the share locations in a referral list and will show the active server being used for the client. I assume it uses Sites and Services to determine which location will be active by default. The namespace works the same way: DFS creates a shared folder on the server hosting the namespace, and you'll find the same DFS tab showing the server hosting the namespace. Presumably DFS causes the redirect when you type in \\domain\namespace, since it still works if your domain is not resolving to the server hosting the namespace. Yaos fucked around with this message at 19:44 on Nov 8, 2012 |
# ? Nov 8, 2012 18:52 |
|
I'm using MDT to deploy images. How can I get the LTI wizard thing to prompt me to manually enter a PC name? I don't get asked, and I end up with a PC called 'Network-DAFD466'. Google turns up a bunch of info relating to SCCM, which I am not using.
|
# ? Nov 13, 2012 12:27 |
|
Swink posted:I'm using MDT to deploy images. How can I get the LTI wizard thing to prompt me to manually enter a PC name? I don't get asked, and I end up with a PC called 'Network-DAFD466' Try putting SkipComputerName=No in your Rules section.
|
# ? Nov 13, 2012 13:30 |
|
Swink posted:I'm using MDT to deploy images. How can I get the LTI wizard thing to prompt me to manually enter a PC name? I don't get asked, and I end up with a PC called 'Network-DAFD466' I'm not sure how you're naming your computers; I have a computer rename script that pulls the serial number from Dells/HPs that I could share with you.
|
# ? Nov 13, 2012 16:49 |
|
I have nothing, just a customsettings.ini that sets a bunch of stuff. I purposely left out anything to do with the computer name, assuming that I would be prompted; instead I get assigned a random name. I tried putting in SkipComputerName=No and OSDComputerName=, but that doesn't work; I still get the random name. Ahh - turns out I had SkipWizard=YES in there. Removing that line sorted me out. Swink fucked around with this message at 23:36 on Nov 13, 2012 |
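For anyone else hitting this, a minimal CustomSettings.ini sketch that still prompts for a name (values illustrative):

```ini
[Settings]
Priority=Default

[Default]
SkipComputerName=NO
; Don't set SkipWizard=YES - it suppresses every wizard pane,
; including the computer name prompt
```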
# ? Nov 13, 2012 22:55 |
|
I'm trying to set up the following system with 3 Windows Server 2012 machines:
The iSCSI target server has 2 virtual drives and 2 targets independently assigned to each Hyper-V server, so each Hyper-V has a physical drive C: hosting the OS, and a D: which is the virtual drive on the iSCSI target, where the virtual machines get stored. What I want to do is have some kind of failover solution, so that if the iSCSI target server dies, the Hyper-V machines automatically switch over to an image that's been syncing to the C: Is this possible? Nahrix fucked around with this message at 17:32 on Nov 14, 2012 |
# ? Nov 14, 2012 01:41 |
|
You'd probably want to look at failover clustering.
|
# ? Nov 15, 2012 11:08 |
|
incoherent posted:You'd probably want to look at failover clustering. Yeah, I'm looking into that right now. Having never touched it before, I'm confused on one part. I've got the 2 Hyper-Vs in a cluster, and I can only add storage for machines that are in the cluster. So, I set up an iSCSI target to HyperV1, add its storage as a CSV, and it's all good so far. What I'm wondering is, if HyperV1 goes down, does the iSCSI target and initiator get automatically remapped on HyperV2? Edit: Figured it out. You need to assign one volume 2 targets, or n targets for n machines in the cluster. I had tried this before without CSVs and got some really unpredictable behavior from it, so I avoided it until now. With CSV, it works like you'd expect. Nahrix fucked around with this message at 00:24 on Nov 16, 2012 |
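Roughly what that looks like in PowerShell on 2012, assuming node names HyperV1/HyperV2 and illustrative addresses:

```powershell
# On each node, connect the same iSCSI target (both nodes see one shared LUN)
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.10
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Then build the cluster and promote the shared disk to a CSV
New-Cluster -Name HVCluster -Node HyperV1,HyperV2 -StaticAddress 10.0.0.50
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```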
# ? Nov 15, 2012 20:56 |
|
Don't think this is exactly the right thread for this question, but it might be the closest one. Does anyone have a good suggestion for software asset management? I don't mean deployment and such, I just mean keeping an inventory of what we have, the keys for it, when it expires, who bought it when and for what, that type of thing. Apparently where I work no one has ever done this well and it's slowly falling more and more to me. I'm not a big fan of the current method - "check this excel sheet, then this sharepoint sheet, then this excel sheet, then ask John, then ask Jane, and if you still can't find the license key we probably lost it." So I'd really like to get it all in one centralized, easy to search and use location. I'm guessing there are better options than "use an excel sheet". Currently using SCCM 2012 with over 250 applications and over 2,000 computers, and we have no standard process or procedure for keeping track of most of that. Thrawn200 fucked around with this message at 17:04 on Nov 19, 2012 |
# ? Nov 19, 2012 17:02 |
|
I've been using a Google Doc (we have Google for EDU) that I share with staff, but it only has 4 items, so that's not really helpful I guess. I would say the best might be some kind of shared spreadsheet, and then require some management buy-in that makes people responsible for keeping it up to date.
|
# ? Nov 19, 2012 21:35 |
|
FISHMANPET posted:I've been using a Google Doc (we have Google for EDU) that I share with staff, but it only has 4 items so that's not really helpful I guess. We do use KeyServer already for some of our Adobe products and such, so I'm going to look more into possibly using that since we already pay for it and have it set up. Might open up some nicer options for better tracking usage and stuff as well.
|
# ? Nov 20, 2012 18:52 |
|
Spiceworks has software license management in it. It's a little rough, but it gets better each release and it's free.
|
# ? Nov 20, 2012 19:11 |
|
Thrawn200 posted:We do use KeyServer already for some of our Adobe products and such, going to look more into possibly using that since we already pay for it and have it set up. Might open up some more nice options of better tracking usage and stuff as well. We use Secret Server and it's quite good. We bought it for use as a password database, but it can also hold product keys. It lets you set expiry dates on keys, configure alerting emails, and audit access to individual passwords, and it's fully AD-integrated.
|
# ? Nov 21, 2012 01:18 |
|
Disaster recovery is kind of a clusterfuck for the company I work for right now, and I'm doing my damnedest to fix it. Unfortunately I'm more code monkey and less system admin, and was thrown into the role of system admin a year ago when our admin quit on us. Currently, our backup solution is a Linux box running Bacula to do incremental and file-level backups with easy restoration. I set that up a while ago because it was a quick and easy way to get us to the point where we could restore files that someone in the office hosed up and go a version or two back. It's great for small disasters, but I'm trying to build a "holy poo poo our entire rack/server room has gone to poo poo" level backup solution as well. In addition, I want to make it easy to make true offsite backups. The first step is backing up our main file server/active directory server, since it's the most critical. I got Windows Server Backup and the command line tools installed. I was able to make a schedule to do our company fileshare once a night, and it worked great last night. Now I want to do a weekly bare metal backup that will get the whole Windows install (2008 R2), but I can't schedule a second backup using the GUI. This leads me to wbadmin.exe and the command line tools. I want to script the "Full server" backup option that I can choose when going to the "backup once" option in the GUI, but there doesn't seem to be a command line option that just does it. What I've managed to come up with is this: wbadmin.exe start backup -backupTarget:\\nas\share\folder\servername\ -include:c:\,e:\,f:\,p:\ -vssFull -systemState -allCritical -quiet Is that right? Will that get everything? Or am I better off using the GUI to schedule a weekly full backup, and using wbadmin.exe to just back up the F:\ and E:\ drives?
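Since the GUI only allows one schedule, the weekly bare metal run could be driven from Task Scheduler instead; a sketch, assuming the same target share as above (note that wbadmin needs the start backup subcommand):

```powershell
schtasks /Create /TN "Weekly Bare Metal Backup" /SC WEEKLY /D SUN /ST 23:00 /RU SYSTEM /TR "wbadmin start backup -backupTarget:\\nas\share\folder\servername -include:c:,e:,f:,p: -allCritical -systemState -vssFull -quiet"
```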
|
# ? Nov 21, 2012 16:31 |
|
Frozen-Solid posted:Is that right? Will that get everything? Or am I better off using the GUI to schedule a weekly full backup, and using wbadmin.exe to just back up the F:\ and E:\ drives? That should get everything on C:, E:, F:, and P:, as well as the critical system volumes and the system state, using the 'Full' volume shadow copy, which will reset the backup flag on the files. Nebulis01 fucked around with this message at 18:11 on Nov 21, 2012 |
# ? Nov 21, 2012 18:07 |
|
Nebulis01 posted:That should get everything on C:, E:, F:, P: as well as critical system volumes and the system state using the 'Full' volume shadow copy which will reset the backup flag of the files Which should be everything to restore if we lose the whole thing, right? (Those are all the drives, minus the CDROM.) Though, I'm now wondering if vssCopy should be set instead. I have no idea if Bacula checks the backup flag or has its own way of dealing with incremental backups.
|
# ? Nov 21, 2012 19:03 |
|
Frozen-Solid posted:Which should be everything to restore if we lose the whole thing, right? (those are all the drives, minus the CDROM) That will provide you with a file that will let you do a bare metal recovery using Windows Server Backup. I have no idea about the backup flag + Bacula, though.
|
# ? Nov 21, 2012 19:23 |
|
Here comes my very own hard-drive failure story: An old server whose services had been mostly migrated to a new server, basic AD stuff, died recently. While it once had (Windows) RAID 1, that failed a long time ago and was never rebuilt. Most of the data was on DFS so users are mostly happy. Unfortunately, the FSMO roles were on it and when I try to do anything in DFS on the new server, it gives me errors about not being able to contact the domain. So, I'd like to revive it. I've used the RAID utility on the Silicon Image SATA card to try and copy the drive to the spare should've-been-RAID drive but it failed at just under 90%. I can browse both drives using Runtime.org Knoppix boot CD but when I try to boot off the new drive, although it starts to boot Windows (now on the mobo SATA), that fails, and Server's CD repair option can't see the installation. Diskpart sees the drive as invalid and doesn't let me go as far as selecting the partitions (which it can see) and its recover command doesn't work. So, does anyone know how I can remove this invalid marker so I can repair the Windows installation and gracefully demote the DC? I hadn't bothered with backups because the main server is backed up and this one was really just there for redundancy. alanthecat fucked around with this message at 18:06 on Nov 23, 2012 |
# ? Nov 23, 2012 17:51 |
|
It might be less effort to forcibly remove the DC from the domain and seize the FSMO roles onto another DC.
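A sketch of the seizure, assuming NEWDC is the surviving DC (ntdsutil accepts its menu commands as quoted arguments):

```powershell
# Seize each FSMO role onto the surviving DC, then quit out of both menus
ntdsutil "roles" "connections" "connect to server NEWDC" "quit" `
  "seize schema master" "seize naming master" "seize infrastructure master" `
  "seize pdc" "seize rid master" "quit" "quit"
```

After that, the dead DC's leftovers get cleaned out of AD with ntdsutil's metadata cleanup menu.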
|
# ? Nov 24, 2012 01:41 |
|
So I work for an MSP with a bunch of clients that have their own individual domains. We'd like to set up certain services for them (one of the things we sell is colocation and virtualization), but need to centralize things. Is the best solution to this to set up trusts between their domains and our master domain? That's what I was thinking, but I'm worried it could get too complicated.
|
# ? Nov 24, 2012 01:48 |
|
Powdered Toast Man posted:So I work for an MSP with a bunch of clients that have their own individual domains. We'd like to set up certain services for them (one of the things we sell is colocation and virtualization), but need to centralize things. Is the best solution to this to set up trusts between their domains and our master domain? That's what I was thinking, but I'm worried it could get too complicated. Can you give a more specific example? ADFS is good for stuff like this, but it's not trivial to set up.
|
# ? Nov 26, 2012 18:22 |
|
Everybody is asking complicated questions and here I am asking a dumb one. We're getting Active Directory set up on a 2008 domain, and part of that is getting home folders created. We would like the home folders to match the user's logon name; however, the %username% variable only uses the pre-Windows 2000 logon name, which has a limit of 20 characters. While most usernames will fall under this, we have a few that get cut off. Does anybody know how to get the user's full username and not just the pre-Windows 2000 name? More importantly, why is this not an optional field? Yaos fucked around with this message at 18:37 on Nov 26, 2012
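If it helps, the full logon name lives in userPrincipalName; a hedged PowerShell sketch that builds home folders from it (\\fileserver\home$ and H: are illustrative):

```powershell
Import-Module ActiveDirectory

Get-ADUser -Filter * | ForEach-Object {
    # The UPN prefix is the full logon name, not the 20-char pre-2000 one
    $name = ($_.UserPrincipalName -split '@')[0]
    $path = "\\fileserver\home$\$name"
    New-Item -Path $path -ItemType Directory -Force | Out-Null
    Set-ADUser $_ -HomeDirectory $path -HomeDrive 'H:'
}
```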
# ? Nov 26, 2012 18:26 |
|
You cannot create user accounts that have the same pre-2000 user name - even if they do differentiate afterwards - so this shouldn't be a problem.
|
# ? Nov 26, 2012 20:08 |
|
skipdogg posted:Can you give a more specific example? ADFS is good for stuff like this but it's not trivial to setup. Specifically we want to set up a WSUS server, which requires poo poo like GPOs to enforce it.
|
# ? Nov 26, 2012 23:20 |
|
Does anybody have to deal with cloud apps like Dropbox in their environment? We're a University, so a lot of researchers have their Dropbox full of research papers and stuff. It's not private data, so we can't get them to stop on those grounds. I'm also not really sure I want them to stop. The problem is apparently that we don't have the storage space for people to have their Dropbox data stored in their roaming profile. It's not really possible to block the dropbox.exe by hash, because I'm guessing it changes a lot, and I don't want to play that game with the users, so what's a good way to "manage" the use of Dropbox?
|
# ? Nov 30, 2012 21:59 |
|
AppLocker to stop Dropbox being used, or just exclude Dropbox from the roaming profile. That doesn't stop someone setting their Documents folder as the Dropbox one, I suppose; it's a tough nut to crack.
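For the roaming-profile half, the supported route is the "Exclude directories in roaming profile" user GPO, which writes a semicolon-separated ExcludeProfileDirs value; shown here as a raw registry sketch (paths are relative to the profile root, and the Dropbox folder location is an assumption):

```powershell
reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v ExcludeProfileDirs /t REG_SZ /d "AppData\Local;AppData\LocalLow;Dropbox" /f
```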
|
# ? Nov 30, 2012 22:16 |
|
I'm able to block it at the firewall through SonicWall's App Control. It works pretty well. This is an app-level block and includes three different Dropbox app signatures. If some people are deemed "more equal", it's very easy to configure exceptions to the rule.
|
# ? Nov 30, 2012 22:24 |
|
I don't want to block it though, that's the problem. I'm thinking preventing the folder from roaming would be a big help. But we've also got student labs where nobody uses the same computer twice, so every time they'd log in they'd be syncing a pile of data. And that is a tough nut to crack.
|
# ? Nov 30, 2012 22:26 |
|
Prevent it from roaming, then have a GPO applied to lab PCs to block it with AppLocker? That's the best you can do I think.
|
# ? Nov 30, 2012 22:42 |
|
FISHMANPET posted:I don't want to block it though, that's the problem. I'm thinking preventing the folder from roaming would be a big help. I would start with a default "block off dropbox"; Dropbox does have a web client, so they aren't completely SOL. I'd then double back for, say, certain faculty computers, and have a specific opt-in group that re-enables access with a custom install/settings that puts the sync folder somewhere else (local drive, other network storage). People who opt in promise not to change their Dropbox folder, share new folders, or sync anything in their profile.
|
# ? Dec 1, 2012 19:07 |
|
Does anyone use Microsoft Search Server Express 2010? When you search files, it opens them with the file:// protocol which means any browser that isn't IE isn't going to open the links.
|
# ? Dec 5, 2012 20:46 |
|
What's the best way to "seed" a DFS share? I bought a server that's going to our DR site and will eventually be the central DFS target for 3 offices. The biggest data copy will be from here in our home office. Should I just do a robocopy and then set up the DFS share once the server is racked up at the site?
|
# ? Dec 6, 2012 16:10 |
|
What's your topology, and size of the replicated folder(s)? You can use robocopy, but it will still have to rehash all the files when you bring the new folder into the environment, touch timestamps, etc., to get everything in sync. If you could bring it up at the home office, get it added, and let it do the initial sync, that would probably be easiest, depending on how fast the link is between there and your DR site. This article might be helpful: http://blogs.technet.com/b/askds/archive/2010/09/07/replacing-dfsr-member-hardware-or-os-part-2-pre-seeding.aspx
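The pre-seed copy itself is roughly this (paths illustrative; /B and /COPYALL keep ACLs intact, which DFSR's hashes cover, and the DfsrPrivate folder stays excluded):

```powershell
robocopy D:\Shares\Data \\DRSERVER\D$\Shares\Data /E /B /COPYALL /R:3 /W:5 /XD DfsrPrivate /LOG:C:\preseed.log
```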
|
# ? Dec 6, 2012 16:21 |
|
devmd01 posted:What's your topology, and size of the replicated folder(s)? The main folder is about 200 gigs. The new server will be going to our DR site in Philly; our site is here in NYC. When it's in, we'll be adding VPN connections to all of our offices (additional in NYC, LA, London, Paris). There is a different subnet in Philly, so I'll be assigned an IP by the company who manages the racks. We have a 20 meg metro ethernet here in NYC, and I'm not 100% sure how much bandwidth we have in Philly, but apparently it's a "lot" according to their tech. Thanks for the link, I'll do some reading. E: Both servers are 2008 R2, and yes, I can get it set up here in the office before I do the move. Matt Zerella fucked around with this message at 17:00 on Dec 6, 2012 |
# ? Dec 6, 2012 16:58 |
|
|
|
LmaoTheKid posted:The main folder is about 200 gigs. That article spells out the process pretty comprehensively. Just don't do what my Operations team did and try a normal copy; you will end up with every single file being compared, found to be different, and moved into the conflicts folder. On a 2Mbit satellite connection, that wasn't particularly fun.
|
# ? Dec 8, 2012 07:20 |