|
The thread in question: https://community.spiceworks.com/topic/1739604-hyper-v-crisis-hell?page=1 Lots of weird SAN hate and poo poo like this:
|
# ? Jul 29, 2016 18:40 |
|
|
|
Spring Heeled Jack posted:The thread in question: https://community.spiceworks.com/topic/1739604-hyper-v-crisis-hell?page=1 SANs are only useful if they have six petabytes or something? Or am I missing the point?
|
# ? Jul 29, 2016 18:45 |
|
^--- It seems the argument is that you should NOT be hosting the VMs on storage that is not internal. Apparently I need to frequent spiceworks more often for a good laugh. Along with what Nuclearmonkee said, I should call up Dell and ask why our 6 ProLiant servers don't have internal drives and only have storage controllers that connect back to my SANs. Also, I should probably yell at our architect for setting up a system that is so poorly thought out, and possibly get him fired by explaining that I could do a better job with my consumer-grade 5400RPM 1TB drives from Amazon... MF_James fucked around with this message at 18:47 on Jul 29, 2016 |
# ? Jul 29, 2016 18:45 |
|
You're all just pissed off because you spent good money on storage that isn't necessary, when you could have hit up eBay and built some SAM-SDs.
|
# ? Jul 29, 2016 18:50 |
|
MF_James posted:Apparently I need to frequent spiceworks more often for a good laugh. 95% of the reason I browse their forums is to see freakouts about ransomware when people don't have backups, 5% miscellaneous laughs.
|
# ? Jul 29, 2016 18:50 |
|
Must refrain from going on the Spiceworks forum and calling everyone names.
|
# ? Jul 29, 2016 18:51 |
|
Arsten posted:
I think the poster's suggestion is that one shouldn't bother with a SAN at such a small scale. But it's stupid advice because they have no idea what the environment is and what the requirements might be. If the VMs only need a hundred gigs of local storage, then 5TB might be fine. They should focus on answering the question instead of acting like King SAN.
|
# ? Jul 29, 2016 18:51 |
|
xzzy posted:I think the poster's suggestion is that one shouldn't bother with a SAN at such a small scale. But it's stupid advice because they have no idea what the environment is and what the requirements might be. If the VMs only need a hundred gigs of local storage, then 5TB might be fine. Oh I see. Funny how a medium business with 50 users might not need Google's storage space.
|
# ? Jul 29, 2016 18:54 |
|
6TB SAN - $20,000 from a VAR
6TB HD - $249 from Newegg
|
# ? Jul 29, 2016 18:54 |
|
Bob Morales posted:6TB SAN - $20,000 from VAR Moving hard drives around to start a VM that lived on a dead server: priceless.
|
# ? Jul 29, 2016 18:59 |
|
I mean to be fair about the VMs-on-SAN post, he made that post from 1997, so give the guy a break.
|
# ? Jul 29, 2016 19:02 |
|
MF_James posted:
Buffalo TeraStation and updating your resume.
|
# ? Jul 29, 2016 19:06 |
|
Have they made up this IPOD acronym thing? Because I have never heard it before.
|
# ? Jul 29, 2016 19:11 |
|
18 Character Limit posted:Buffalo TeraStation and updating your resume. No poo poo, back when I was an intern at an MSP (5 years ago) they bought one to help build out their 'virtual environment'. Complete with 6+ year old HP hosts with 32GB of memory each. When I left the company it was effectively a forgotten NAS with a handful of ISO files stored on it. We had one of the drives fail while under warranty and Buffalo told us they did not have any replacement drives available at the time. I think they were WD Black 2TB 7200RPM drives. Spring Heeled Jack fucked around with this message at 19:22 on Jul 29, 2016 |
# ? Jul 29, 2016 19:13 |
|
NAS? SAN? Now you're just changing letters around!
|
# ? Jul 29, 2016 19:15 |
|
Spring Heeled Jack posted:No poo poo, back when I was an intern at an MSP (5 years ago) they bought one to help build out their 'virtual environment'. Complete with 6+ year old HP hosts with 32GB of memory each. When I left the company it was effectively a forgotten NAS with a handful of ISO files stored on it. "If my calculations are correct, when this baby hits 94% full, you're gonna see some serious poo poo."
|
# ? Jul 29, 2016 19:18 |
|
MC Fruit Stripe posted:I mean to be fair about the VMs-on-SAN post, he made that post from 1997, so give the guy a break. What? Both pictures show 2016 as the post date, unless this is a joke... then... Whoooooooooshhhhhh
|
# ? Jul 29, 2016 19:18 |
|
Spring Heeled Jack posted:No poo poo, back when I was an intern at an MSP (5 years ago) they bought one to help build out their 'virtual environment'. Complete with 6+ year old HP hosts with 32GB of memory each. When I left the company it was effectively a forgotten NAS with a handful of ISO files stored on it. I need to make a call ASAP to our architects; a couple of our VM cluster hosts only have a 32GB SD card on them to boot ESX. I'm worried.
|
# ? Jul 29, 2016 19:20 |
|
Thanks Ants posted:Have they made up this IPOD acronym thing because I have never heard it before. It's been floating around as a buzzword (it stands for "inverted pyramid of doom") for a while, but I've never seen it used heavily or seriously. It's basically an argument that any VM environment that relies on storage that only exists in one location is not actually highly available. These spiceworks doofuses seem to be interpreting it to mean the only HA solution is for every server to have its own internal storage.
|
# ? Jul 29, 2016 19:21 |
|
Also, someone seriously suggested turning on jumbo frames without any information on whether their infrastructure supports it. Jumbo frames ARE good for iSCSI, but only if your whole drat infrastructure supports them; if you turn that poo poo on and your switches don't accept jumbo frames? Helloooo dropped packets. Ask me about when we built our 6 VM hosts and the guy building the hosts plus guest VMs turned jumbo frames on without that being part of the design, nor asking if the infra would support it....
|
# ? Jul 29, 2016 19:26 |
|
MF_James posted:Also, someone seriously suggested turning on jumbo frames without any information on whether their infrastructure supports it. Jumbo frames ARE good for iSCSI, but only if your whole drat infrastructure supports them; if you turn that poo poo on and your switches don't accept jumbo frames? Helloooo dropped packets. Ask me about when we built our 6 VM hosts and the guy building the hosts plus guest VMs turned jumbo frames on without that being part of the design, nor asking if the infra would support it.... Even then, performance gains are not huge. I have not bothered, just because I don't need to and no one will notice.
|
# ? Jul 29, 2016 19:29 |
|
MF_James posted:Also, someone seriously suggested turning on jumbo frames without any information on whether their infrastructure supports it. Jumbo frames ARE good for iSCSI, but only if your whole drat infrastructure supports them; if you turn that poo poo on and your switches don't accept jumbo frames? Helloooo dropped packets. Ask me about when we built our 6 VM hosts and the guy building the hosts plus guest VMs turned jumbo frames on without that being part of the design, nor asking if the infra would support it.... We had an issue with that a few years ago. There was a legitimate need for jumbo frames, and the dudes implementing a project went through the channels, asked for it, and were told the switches would be ready and everything would be fine. Except the networking group has no effective change management; pretty much everything is done on the fly, and if someone misses a switch somewhere, well, too bad, we'll fix it when you find it! This triggered several months where every single performance problem on the network resulted in a ticket sent to networking asking them to verify jumbo frames were properly configured.
|
# ? Jul 29, 2016 19:32 |
MF_James posted:Also, someone seriously suggested turning on jumbo frames without any information on whether their infrastructure supports it. Jumbo frames ARE good for iSCSI, but only if your whole drat infrastructure supports them; if you turn that poo poo on and your switches don't accept jumbo frames? Helloooo dropped packets. Ask me about when we built our 6 VM hosts and the guy building the hosts plus guest VMs turned jumbo frames on without that being part of the design, nor asking if the infra would support it.... Unless the frames have the DF bit set, it will just fragment them, which both cancels the benefit of jumbo frames and slightly increases the amount of work your switches are doing, since they are now fragmenting poo poo that's going over the maximum MTU. On a related note, if you are testing to make sure your jumbo frames work and do this: code:
code:
Nuclearmonkee fucked around with this message at 19:36 on Jul 29, 2016 |
|
# ? Jul 29, 2016 19:33 |
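The code blocks in Nuclearmonkee's post above didn't survive, but the point about the DF bit can be sketched with some quick arithmetic. This is a hedged illustration, not the lost code: the 9000-byte MTU, header sizes, and ping flags in the comments are the standard values, and the functions are made up for illustration.

```python
# Sketch: why a don't-fragment ping is the right jumbo-frame test.
# Without DF set, an oversized packet is silently fragmented and the
# ping "succeeds" even though jumbo frames never actually pass end to end.
import math

IP_HEADER = 20    # bytes, IPv4 header without options
ICMP_HEADER = 8   # bytes, ICMP echo header

def max_ping_payload(mtu: int) -> int:
    """Largest ICMP payload that fits in one frame at the given MTU."""
    return mtu - IP_HEADER - ICMP_HEADER

def fragments_needed(payload: int, path_mtu: int) -> int:
    """How many fragments an ICMP datagram of `payload` bytes becomes
    when it crosses a link with a smaller MTU (DF bit clear)."""
    datagram = payload + ICMP_HEADER + IP_HEADER
    per_fragment = path_mtu - IP_HEADER  # each fragment re-adds an IP header
    return math.ceil((datagram - IP_HEADER) / per_fragment)

# For a 9000-byte jumbo MTU the classic test payload is 8972 bytes:
#   Linux:   ping -M do -s 8972 <target>
#   Windows: ping -f -l 8972 <target>
print(max_ping_payload(9000))        # 8972
# The same datagram squeezed through a standard 1500 MTU link:
print(fragments_needed(8972, 1500))  # 7 fragments
```

If the don't-fragment ping fails while a plain ping works, some hop in the path is not passing jumbo frames, which is exactly the misconfiguration being described.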
|
Nuclearmonkee posted:Unless the frames have the DF bit set, it will just fragment them, which both cancels the benefit of jumbo frames and slightly increases the amount of work your switches are doing, since they are now fragmenting poo poo that's going over the maximum MTU. Well, it was not only fragmenting the data. We wiresharked our issues a shitload and we saw (just for argument) a 100000-size packet leave VM 1, and then 10 10000-size packets hit VM 2, but we were also seeing a shitload of packets getting dropped at the switches (confirmed by our network team). It could have been (and likely was) a combination of issues; we have affected Broadcom NICs where we had to turn off LSO/VMQ, but we still had the problem of packets getting dropped. It wasn't until we wiresharked it, saw the larger packets, checked the configuration, saw jumbo frames were turned on (no one had any idea that this would be the case because it was NOT documented in the initial design our architect gave to the builder), and then changed the MTU back to normal that our lives became not hell. *edit* I also did not know about the DF bit, something new to read up on
|
# ? Jul 29, 2016 19:39 |
MF_James posted:Well, it was not only fragmenting the data. We wiresharked our issues a shitload and we saw (just for argument) a 100000-size packet leave VM 1, and then 10 10000-size packets hit VM 2, but we were also seeing a shitload of packets getting dropped at the switches (confirmed by our network team). It could have been (and likely was) a combination of issues; we have affected Broadcom NICs where we had to turn off LSO/VMQ, but we still had the problem of packets getting dropped. It wasn't until we wiresharked it, saw the larger packets, checked the configuration, saw jumbo frames were turned on (no one had any idea that this would be the case because it was NOT documented in the initial design our architect gave to the builder), and then changed the MTU back to normal that our lives became not hell. Documentation is for cowards! All knowledge of the system should be held in random secretive people's heads.
|
|
# ? Jul 29, 2016 19:41 |
|
Some switches are just plain bad at iSCSI as well because they're slow and have tiny buffers.
|
# ? Jul 29, 2016 19:41 |
|
Nuclearmonkee posted:Documentation is for cowards! Oh hi, coworker.
|
# ? Jul 29, 2016 19:42 |
|
This is my experience of most environments with regards to hidden documentation: https://www.youtube.com/watch?v=BJmTIIpUvDQ&t=74s
|
# ? Jul 29, 2016 19:49 |
|
^--- For some reason, the first thing that came to my mind was "A JELLY DOUGHNUT" Thanks Ants posted:Some switches are just plain bad at iSCSI as well because they're slow and have tiny buffers. Thankfully our architect is not a goofus and the [s]switching[/s] all infra is decent, and the client actually listened to his recommendation. Mostly; they did get 1Gb switches instead of 10Gb, although there might have been :reasons: I'm not privy to. We are about to upgrade the switches though, which will be nice, going from 4 1Gb connections per host to 2 10Gb.
|
# ? Jul 29, 2016 19:50 |
|
Thanks Ants posted:Some switches are just plain bad at iSCSI as well because they're slow and have tiny buffers. Once upon a time, about 15 years ago, some users wanted to stick with 100Mb switches because they had bigger buffers than the hot new gigabit switches that were coming out. (Or maybe it was 10Mb switches vs 100Mb switches, I can't remember anymore.) They had very specific requirements though: this system was built to read data off a particle accelerator and was designed in the '90s. The engineers actually worked the available buffer on the switch into their design to keep the stream flowing smoothly. When Cisco started selling faster switches they skimped on the buffers, which probably seemed totally reasonable.. now that the network is 10 times faster, you shouldn't need as much buffer, right?? Well in this case it hosed everything up, because the software on the far end of the connection was suddenly getting more data than it had been written to handle and started dropping packets (which meant data was permanently lost).
|
# ? Jul 29, 2016 19:51 |
Speaking of Buffalo NAS devices... Got a ticket I'm working with a customer where his backup jobs usually don't work. He's trying to save to a Buffalo NAS and the error returns that the location does not exist. However, if he triggers one, which fails, then triggers another 3-5 minutes later, it works. I'm like 99% sure this stupid piece of poo poo Buffalo is going to sleep. The wake-on-LAN kicks in, but the Buffalo doesn't wake before the backup process gives up. But it does wake, so the next attempt works. At least, that's my theory. The dude is pushing back hard. Claims to have disabled all sleep options, but I think it's sleeping anyway. Anyone ever run into something like this? I know this isn't the Buffalo support forum.
|
|
# ? Jul 29, 2016 20:08 |
|
Nuclearmonkee posted:Documentation is for cowards! Make sure those people are also really standoffish when you ask them anything.
|
# ? Jul 29, 2016 20:08 |
|
ConfusedUs posted:Speaking of Buffalo NAS devices... Just turn power saving off so the disks don't spin down. Does the backup software support adjusting the timeout? How about a script that runs before the backup that does something like trying to list the contents of the directory and then waits a few minutes before exiting (horrible ugly bodge of a solution).
|
# ? Jul 29, 2016 20:39 |
Thanks Ants posted:Just turn power saving off so the disks don't spin down. Does the backup software support adjusting the timeout? How about a script that runs before the backup that does something like trying to list the contents of the directory and then waits a few minutes before exiting (horrible ugly bodge of a solution). According to the dude (who won't let us actually, you know, verify his claims), all sleep options and power saving and stuff are turned off. I'm not sure I believe him, frankly, simply because I can trigger a backup, watch it fail, wait about five minutes, and then have it work. Backups to local disks and other locations don't suffer from this problem, of course. Sadly the software doesn't support adjusting the timeout (which is already pretty generous at 3 minutes). However, it does support running a script before the backup...
|
|
# ? Jul 29, 2016 22:51 |
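The pre-backup script idea from the posts above can be sketched in a few lines: poke the share so the NAS starts waking, then poll until it answers or a timeout expires. This is a hypothetical sketch, not anything from the thread; the function name, path, and timings are all invented placeholders.

```python
import os
import time

def wait_for_share(path: str, timeout: float = 300.0, interval: float = 10.0) -> bool:
    """Poll `path` (e.g. a mounted NAS share) until it is listable,
    giving a sleeping NAS time to spin up. Returns True once the
    directory can be listed, False if the timeout expires first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.listdir(path)  # the access attempt itself nudges the NAS awake
            return True
        except OSError:
            time.sleep(interval)
    return False

# Typical use as the pre-backup script (mount point is hypothetical):
#   if not wait_for_share("/mnt/buffalo-backup"):
#       raise SystemExit("NAS never woke up; aborting before the backup runs")
```

Since the backup software's own timeout is a fixed 3 minutes, the trick is that this script burns the wake-up delay before the backup job ever starts its clock.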
|
Had a memory stick go bad, ran MS's mem check util all night with no errors, ran memtest86+ all day with no errors. Finally decide to start pulling the sticks, find the bad one. (home computer btw) The manufacturer no longer sells single modules of that size, not going to give them my money for a whole new kit, so I bought a whole new set from another brand. It lasted four years, which isn't too bad. The rest of the components (sans GPU) are all about seven years old.
|
# ? Jul 30, 2016 03:41 |
|
ConfusedUs posted:According to the dude (who won't let us actually, you know, verify his claims), all sleep options and power saving and stuff is turned off. I was trying to help a client over the phone fix a NAT/port forwarding issue with a device we installed on their network. I controlled the server it was reaching out to over the internet, and I configured it to talk to their public IP. The device would see my traffic and respond, but the responses never made it to my server. After a while, I ended up doing a Wireshark trace on the server's firewall, and I saw the traffic returning, but coming from a different IP, so of course my firewall blocked it. Seems like they had multiple public IPs and configured inbound and outbound traffic over different IPs. I explained it to the client's IT/network guy, who was just an old TV station engineer with a HUGE chip on his shoulder, and he just didn't get it. He kept saying "we have X device from your competitor and it works fine, and the NAT rules are the same, fix your server," but I was 100% sure it was his NAT; I'd seen that many times before. But I'm not allowed to log in to his SonicWall and fix it. After a few hours of dealing with him on the phone, I ask if I can review the NAT rules for the working device, so I can compare the two rule sets and see if they are different. The guy absolutely flips out, yells "HOLD THE HORSES, ARE YOU CALLING ME A LIAR?" I stammer, "no, it'd be helpful to just see some working rules." He slams his phone down and immediately calls my manager's cell phone. The manager was sitting right next to me, and in fact was listening to the call on speaker, since it was an important client; the other guys couldn't figure it out for days and it had been escalated to me, so he was interested in getting it fixed. The client complained, and he assured the client he'd take care of it.
After the call, I assume he's just going to laugh and blow it off, since he heard the entire conversation and I was completely reasonable and polite, but no, he jumps all over me.
|
# ? Jul 30, 2016 04:02 |
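The failure mode in that story, replies arriving from a different public IP than the one the traffic was sent to, is exactly what a stateful firewall's connection tracking drops. A toy sketch of that check, with invented function names and TEST-NET addresses standing in for the real ones:

```python
# Toy model of a stateful firewall's connection table: a reply is only
# accepted if its source matches the destination we originally sent to.

def record_outbound(state: set, src: str, sport: int, dst: str, dport: int) -> None:
    """Note an outbound flow so its expected reply can be matched later."""
    state.add((dst, dport, src, sport))

def accept_reply(state: set, src: str, sport: int, dst: str, dport: int) -> bool:
    """A reply passes only if it comes FROM the (dst, dport) of a recorded
    outbound flow, addressed back to the original (src, sport)."""
    return (src, sport, dst, dport) in state

state = set()
# We send to the client's advertised public IP...
record_outbound(state, "198.51.100.5", 40000, "203.0.113.10", 8080)
# ...a reply from that same IP matches the table and is accepted:
print(accept_reply(state, "203.0.113.10", 8080, "198.51.100.5", 40000))  # True
# ...but their outbound NAT rewrote the source to a second public IP,
# so the reply arrives from 203.0.113.11 and gets dropped:
print(accept_reply(state, "203.0.113.11", 8080, "198.51.100.5", 40000))  # False
```

Real firewalls track protocol and timeouts too, but the mismatch shown on the last line is all a Wireshark trace needs to reveal, which is how the problem was found.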
|
Jerk McJerkface posted:I was trying to help a client over the phone fix a NAT/port forwarding issue with a device we installed on their network. I controlled the server it was reaching out to over the internet, and I configured it to talk to their public IP. The device would see my traffic, and respond, but the responses never made it to my server. After a while, I ended up doing a wireshark trace on the server's firewall, and I saw the traffic returning, but coming from a different IP, so of course my firewall blocked it. Seems like they had multiple public IPs and configured inbound and outbound traffic over different IPs. I explained it to the clients IT/network guy, who was just an old TV station engineer that had a HUGE chip on his shoulder, and he just didn't get it. This is pissing me off and I don't work with you. We stand united. #techlivesmatter #blessed #prayers
|
# ? Jul 30, 2016 13:08 |
|
Jerk McJerkface posted:I was trying to help a client over the phone fix a NAT/port forwarding issue with a device we installed on their network. I controlled the server it was reaching out to over the internet, and I configured it to talk to their public IP. The device would see my traffic, and respond, but the responses never made it to my server. After a while, I ended up doing a wireshark trace on the server's firewall, and I saw the traffic returning, but coming from a different IP, so of course my firewall blocked it. Seems like they had multiple public IPs and configured inbound and outbound traffic over different IPs. I explained it to the clients IT/network guy, who was just an old TV station engineer that had a HUGE chip on his shoulder, and he just didn't get it. Congratulations! You were just promoted to scapegoat for when they lose the account because of the retard who works for the customer!
|
# ? Jul 30, 2016 13:08 |
|
Arsten posted:Congratulations! You were just promoted to scapegoat for when they lose the account because of the retard who works for the customer! No worries. I quit that job shortly after. It's the one where they gave me a promotion, took it back, then gave it to both me and another guy without telling either of us, and wanted us to compete for it.
|
# ? Jul 30, 2016 16:55 |
|
|
|
Jerk McJerkface posted:No worries. I quit that job shortly after. It's the one I had that they gave me a promotion, took it back and gave it back to me and another guy and didn't tell both of us and wanted us to compete for it. It could be worse! They could have had you engage in ritual combat! I wish that particular management fad would get the Ol' Yeller treatment.
|
# ? Jul 30, 2016 17:06 |