ragzilla
Sep 9, 2005
don't ask me, i only work here


Powercrazy posted:

Change miles to meters and this is correct.

1m = 5ns propagation delay, 300km = 3ms delay.

Serialization should be insignificant at 1Gb.

-edit-
Whoops this was round trip, madsushi below has 1 way figures
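
For anyone who wants to sanity-check those figures, a quick sketch in Python (assuming ~5 ns per metre of fibre, i.e. a refractive index of roughly 1.5, and ignoring serialization and switching delay):

code:
# Rough fibre propagation delay, assuming ~5 ns/m. Round trip doubles the
# one-way figure; serialization and switching delay are ignored.
NS_PER_METRE = 5

def fibre_delay_ms(distance_km, round_trip=True):
    one_way_ns = distance_km * 1000 * NS_PER_METRE
    return one_way_ns * (2 if round_trip else 1) / 1e6

print(fibre_delay_ms(300))                     # ~3.0 ms round trip
print(fibre_delay_ms(300, round_trip=False))   # ~1.5 ms one way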

ragzilla fucked around with this message at 23:26 on Aug 15, 2013

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Bluecobra posted:

I am not sure why you think such a short run would have that latency. This is the host-to-host round-trip latency on one of our 1Gb DWDM circuits between two data centers that are 50 miles apart:

Speed of light is about 300 km/ms in a vacuum, or 200 km/ms in fiber, which would mean 600km for 3ms of latency, which is like 370 miles.

ate shit on live tv
Feb 15, 2004

by Azathoth
Yeah, I messed up my units; should have said kilometers. Anyway, the short answer is that simulating latency is hard to do, especially at Gbps or greater.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Bluecobra posted:

I am not sure why you think such a short run would have that latency. This is the host-to-host round-trip latency on one of our 1Gb DWDM circuits between two data centers that are 50 miles apart:

code:
PING foo: 1500 data bytes
1508 bytes from foo (10.X.X.X): icmp_seq=0. time=0.755 ms
1508 bytes from foo (10.X.X.X): icmp_seq=1. time=0.717 ms
1508 bytes from foo (10.X.X.X): icmp_seq=2. time=0.717 ms
1508 bytes from foo (10.X.X.X): icmp_seq=3. time=0.732 ms
1508 bytes from foo (10.X.X.X): icmp_seq=4. time=0.716 ms
1508 bytes from foo (10.X.X.X): icmp_seq=5. time=0.713 ms
1508 bytes from foo (10.X.X.X): icmp_seq=6. time=0.720 ms
1508 bytes from foo (10.X.X.X): icmp_seq=7. time=0.712 ms
1508 bytes from foo (10.X.X.X): icmp_seq=8. time=0.731 ms
1508 bytes from foo (10.X.X.X): icmp_seq=9. time=0.711 ms

----foo PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms)  min/avg/max/stddev = 0.711/0.7224/0.755/0.0136
Agreed. I just checked, and for me to go 35 miles through a carrier's metro Ethernet network (so probably a few extra miles and a couple of switches in between) I get about 1.5ms on the same subnet.

nzspambot
Mar 26, 2010

Bluecobra posted:

I am not sure why you think such a short run would have that latency. This is the host-to-host round-trip latency on one of our 1Gb DWDM circuits between two data centers that are 50 miles apart:

code:
PING foo: 1500 data bytes
1508 bytes from foo (10.X.X.X): icmp_seq=0. time=0.755 ms
1508 bytes from foo (10.X.X.X): icmp_seq=1. time=0.717 ms
1508 bytes from foo (10.X.X.X): icmp_seq=2. time=0.717 ms
1508 bytes from foo (10.X.X.X): icmp_seq=3. time=0.732 ms
1508 bytes from foo (10.X.X.X): icmp_seq=4. time=0.716 ms
1508 bytes from foo (10.X.X.X): icmp_seq=5. time=0.713 ms
1508 bytes from foo (10.X.X.X): icmp_seq=6. time=0.720 ms
1508 bytes from foo (10.X.X.X): icmp_seq=7. time=0.712 ms
1508 bytes from foo (10.X.X.X): icmp_seq=8. time=0.731 ms
1508 bytes from foo (10.X.X.X): icmp_seq=9. time=0.711 ms

----foo PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms)  min/avg/max/stddev = 0.711/0.7224/0.755/0.0136

lol here is 2.1km (10Ge DF)

code:

ping 172.16.255.254
PING 172.16.255.254 (172.16.255.254) 56(84) bytes of data.
64 bytes from 172.16.255.254: icmp_req=1 ttl=254 time=7.13 ms
64 bytes from 172.16.255.254: icmp_req=2 ttl=254 time=1.78 ms
64 bytes from 172.16.255.254: icmp_req=3 ttl=254 time=3.17 ms
64 bytes from 172.16.255.254: icmp_req=4 ttl=254 time=3.83 ms
64 bytes from 172.16.255.254: icmp_req=5 ttl=254 time=2.45 ms
64 bytes from 172.16.255.254: icmp_req=6 ttl=254 time=2.46 ms
64 bytes from 172.16.255.254: icmp_req=7 ttl=254 time=2.62 ms
64 bytes from 172.16.255.254: icmp_req=8 ttl=254 time=2.87 ms
64 bytes from 172.16.255.254: icmp_req=9 ttl=254 time=2.74 ms
64 bytes from 172.16.255.254: icmp_req=10 ttl=254 time=2.57 ms
64 bytes from 172.16.255.254: icmp_req=11 ttl=254 time=1.90 ms
64 bytes from 172.16.255.254: icmp_req=12 ttl=254 time=2.79 ms
64 bytes from 172.16.255.254: icmp_req=13 ttl=254 time=2.32 ms
^C
--- 172.16.255.254 ping statistics ---
13 packets transmitted, 13 received, 0% packet loss, time 12067ms
rtt min/avg/max/mdev = 1.788/2.975/7.133/1.300 ms

prob those FPOS HP switches or dodgy GFA SFPs that people before me bought

nzspambot fucked around with this message at 01:38 on Aug 16, 2013

ragzilla
Sep 9, 2005
don't ask me, i only work here


Bluecobra posted:

I am not sure why you think such a short run would have that latency. This is the host-to-host round-trip latency on one of our 1Gb DWDM circuits between two data centers that are 50 miles apart:

code:
round-trip (ms)  min/avg/max/stddev = 0.711/0.7224/0.755/0.0136

I hope your DR policy doesn't say 50mi, because that's only ~46mi based on .7224ms rtt

361.2us one-way latency, ~204m/us @ 1.4682 refractive index (SMF-28) = 73684m or 45.79mi. Probably even less than that due to ser/deser time (minimal), switching latency (who knows), and time for the hosts to respond/process.
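
The same back-of-the-envelope in Python, for anyone following along (it assumes c divided by a 1.4682 group index and that the whole RTT is propagation delay, so it's an upper bound on distance):

code:
# Back out fibre distance from an RTT, assuming light travels at c / 1.4682
# (roughly SMF-28) and that the entire RTT is propagation delay.
C = 299_792_458        # m/s in vacuum
GROUP_INDEX = 1.4682   # approximate group index for SMF-28

def fibre_distance_miles(rtt_ms):
    one_way_s = (rtt_ms / 1000) / 2
    metres = one_way_s * C / GROUP_INDEX
    return metres / 1609.344

print(fibre_distance_miles(0.7224))   # ~45.8 miles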

nzspambot if you're pinging the switch instead of a host, you're going to get bad results from the slow processor in it.

Flash z0rdon
Aug 11, 2013

:goonsay:

nzspambot
Mar 26, 2010

ragzilla posted:



nzspambot if you're pinging the switch instead of a host, you're going to get bad results from the slow processor in it.

I get the same results from pinging each hop along the way (switch -> FW -> VM)

nzspambot fucked around with this message at 03:42 on Aug 16, 2013

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

ragzilla posted:

I hope your DR policy doesn't say 50mi, because that's only ~46mi based on .7224ms rtt

361.2us one way latency, ~204m/us @ 1.4682 refractive index (smf-28) = 73684m or 45.79mi. Probably even less than that even due to ser/deser time (minimal), switching latency (who knows), and time for the hosts to respond/process.

nzspambot if you're pinging the switch instead of a host, you're going to get bad results from the slow processor in it.

That's probably correct. The driving directions are about 37 miles so I just rounded up to 50 miles. Also, the switch latency isn't that bad. The host on one end is plugged into a Force10 S4810 which is about 900ns, and the other end is plugged into a 6509 w/Sup 720 with PFC3 (about 20 microseconds), and that is in turn plugged into another S4810 with 900ns latency.
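
Adding those numbers up, a rough sketch (it uses the 37 road miles for the fibre length, so the real route is presumably a bit longer, and host/stack time is left out):

code:
# Rough one-way latency budget for the path described above, in microseconds.
# Fibre uses ~5 ns/m over the ~37 road miles; switch figures are as quoted.
fibre_us  = 37 * 1609.344 * 5 / 1000   # ~298 us of fibre propagation
s4810_us  = 0.9                        # Force10 S4810, ~900 ns each
sup720_us = 20                         # 6509 w/Sup 720 + PFC3, ~20 us
one_way_us = fibre_us + 2 * s4810_us + sup720_us
print(round(one_way_us, 1), "us one way,", round(2 * one_way_us / 1000, 2), "ms RTT")

That lands around 0.64 ms RTT, which lines up with the measured 0.72 ms once you allow for a fibre route longer than the driving distance.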

workape
Jul 23, 2002

Anyone wanting to work with multicast on a Nexus 5k running L3 would be better off shooting themselves in the face than attempting to dig through the damned caveats to find all the wonderful 'oops' moments that can happen when PIM interacts with vPC. Jesus Christ.....

Honestly Cisco, if you don't want people to run L3 on the 5ks, stop loving selling the solution.

Flash z0rdon
Aug 11, 2013

5ks are poo poo.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
Does anyone have any Notepad++ or Sublime Text (or any) Cisco syntax highlighting files? I have been using one designed a long time ago, and it doesn't seem to cover any of the NX-OS commands and only about 50% of the IOS commands. Anyone have a user-defined language file to share?

less than three
Aug 9, 2007



Fallen Rib

madsushi posted:

Does anyone have any Notepad++ or Sublime Text (or any) Cisco syntax highlighting files? I have been using one designed a long time ago, and it doesn't seem to cover any of the NX-OS commands and only about 50% of the IOS commands. Anyone have a user-defined language file to share?

I made one, but it doesn't have NX-OS and was designed around my personal color theme, so probably doesn't work well. But sure! I'll grab it Monday.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Flash z0rdon posted:

5ks are poo poo.

We love ours (layer 2 for life)

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Flash z0rdon posted:

5ks are poo poo.

They're decent access layer devices since you can get FC and ethernet in one box. Layer 3 on them is horrible though.

Flash z0rdon
Aug 11, 2013

1000101 posted:

They're decent access layer devices since you can get FC and ethernet in one box.

Oh yeah forgot about this. :o: Storage likes them.

Langolas
Feb 12, 2011

My mustache makes me sexy, not the hat

adorai posted:

We love ours (layer 2 for life)

Yep, we said gently caress layer 3 on ours. Been great

CrazyLittle
Sep 11, 2001





Clapping Larry

1000101 posted:

They're decent access layer devices since you can get FC and ethernet in one box. Layer 3 on them is horrible though.

Why is there such a huge push to put layer 3 on switches? (or higher?)

Tasty Wheat
Jul 18, 2012

CrazyLittle posted:

Why is there such a huge push to put layer 3 on switches? (or higher?)

Wire-speed routing is faster. It can depend on your network design, but if you have an rear end-ton of VLANs, something has to provide layer 3 between them.

abigserve
Sep 13, 2009

this is a better avatar than what I had before

adorai posted:

We love ours (layer 2 for life)

Yeah never had any problems here either. If you want layer 3 services, don't use a switch designed for layer 2.

abigserve fucked around with this message at 11:03 on Aug 18, 2013

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Tasty Wheat posted:

Wire-speed routing is faster. It can depend on your network design, but if you have an rear end-ton of VLANs, something has to provide layer 3 between them.
We use Vyatta in VMware trunked to a pair of Nexus 5k switches. It routes 10GbE no problem. Best of all, it's free.

CrazyLittle
Sep 11, 2001





Clapping Larry

adorai posted:

We use Vyatta in VMware trunked to a pair of Nexus 5k switches. It routes 10GbE no problem. Best of all, it's free.

Yeah, that's pretty much what I'm getting at. Shame on me for not saying it outright, but why is there a push to put layer 3 on a switch when a router can handle those tasks better?

ruro
Apr 30, 2003

CrazyLittle posted:

Yeah, that's pretty much what I'm getting at. Shame on me for not saying it outright, but why is there a push to put layer 3 on a switch when a router can handle those tasks better?

What do you mean push? Layer 3 has been on switches for ages.

abigserve
Sep 13, 2009

this is a better avatar than what I had before
The basic reason is that you don't want to purchase additional hardware. If you've got some Nexuses already, the simplest design would be to use them to route as well instead of buying at least one 6500 (two if you need redundancy).

Knowledge of the product, of course, tells us that this can be tricky. I should also mention I've never heard of any issue with L3 routing on the Nexus platform unless vPC is involved somewhere, i.e. an N7K with dual everything seems to handle all the standard L3 tasks fine.

This is of course not taking into account HPC-style solutions where it's a routed edge to get around spanning-tree issues (I'm looking at you, Extreme).

workape
Jul 23, 2002

For low levels of unicast traffic the 5k is OK, but start mixing vPC, multicast, and any failover routing and you've got an RPF-check powderkeg waiting to explode in your face. I've used them multiple times for small-scale routing at edge sites where I was looking for an L2/L3 switch with Fibre Channel capabilities. Worked great for that.

CrazyLittle
Sep 11, 2001





Clapping Larry

ruro posted:

What do you mean push? Layer 3 has been on switches for ages.

I'm not seeing the added value of duct-taping a lackluster routing feature onto the side of a decent wire-speed switch. Sure, I get running routing logic on big chassis switches with route processor cards, but trying to do BGP/OSPF for 100Mbit of traffic on a 3750? Why not just use the switch as a switch and use a router to route traffic?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

abigserve posted:

The basic reason is that you don't want to purchase additional hardware. If you've got some Nexuses already, the simplest design would be to use them to route as well instead of buying at least one 6500 (two if you need redundancy).

Knowledge of the product, of course, tells us that this can be tricky. I should also mention I've never heard of any issue with L3 routing on the Nexus platform unless vPC is involved somewhere, i.e. an N7K with dual everything seems to handle all the standard L3 tasks fine.

This is of course not taking into account HPC-style solutions where it's a routed edge to get around spanning-tree issues (I'm looking at you, Extreme).

I'm doing routed-edge designs a lot more frequently now coupled with VXLAN. Bit of the best of both worlds and can keep the VMware people happy.

edit: clarity

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

CrazyLittle posted:

I'm not seeing the added value of duct-taping a lackluster routing feature onto the side of a decent wire-speed switch. Sure, I get running routing logic on big chassis switches with route processor cards, but trying to do BGP/OSPF for 100Mbit of traffic on a 3750? Why not just use the switch as a switch and use a router to route traffic?
I can't speak for the 3750, but I haven't had any routing performance issues on Force10/Arista switches, and that includes 1/10Gb connections. We don't have many cabinets at remote data centers, so two 48-port 1U 10GbE switches are enough to cover our WAN/LAN connectivity. It doesn't make any sense to buy more equipment that will increase power/maintenance costs.

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord
Man, now that I have a cursory understanding of IPv6 from studying for the 640-816 exam, I want to be some sort of "IPv6 is the way of the future!" evangelist.

Pretty cool system, that.

ior
Nov 21, 2003

What's a fuckass?

CrazyLittle posted:

I'm not seeing the added value of duct-taping a lackluster routing feature onto the side of a decent wire-speed switch. Sure, I get running routing logic on big chassis switches with route processor cards, but trying to do BGP/OSPF for 100Mbit of traffic on a 3750? Why not just use the switch as a switch and use a router to route traffic?

What? The 3750 does L3 just as fast as it does L2. However, the control plane/FIB won't scale as high as a 'real' router's. Not really a problem unless you are clueless and try to put a full BGP table on it.

SamDabbers
May 26, 2003



QPZIL posted:

"IPv6 is the way of the future!"

Join us in the futurepresent! Get a tunnel and experience the IPv6 Internet for yourself! Also, "IPv6 Evangelist" is a real job, apparently.

Herv
Mar 24, 2005

Soiled Meat

QPZIL posted:

Man, now that I have a cursory understanding of IPv6 from studying for the 640-816 exam, I want to be some sort of "IPv6 is the way of the future!" evangelist.

Pretty cool system, that.

Should have capped it at a 64-bit address space, max. 48 is even better.

There's a reason or two it's been sitting for, what, 10 years?

I know we need all the IPv4 bolt-ons to be integrated but gently caress, say 2 to the 128th in English. Overkill in my opinion.

SamDabbers
May 26, 2003



Why would it be better to cap it at 64 or 48 bits instead of 128? The whole point was to make it such a large address space that we'd never have to deal with the address exhaustion problem again, at least on this planet. Also, it allows for some cool features like mapping a 48-bit MAC address into the 64-bit host portion for auto-addressing within a subnet. You couldn't do that as easily with a 64 or 48 bit address.
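
That MAC-to-host-portion mapping is modified EUI-64: flip the universal/local bit in the MAC and splice ff:fe into the middle. A minimal Python sketch (the MAC is made up and the prefix is documentation space, not a real network):

code:
import ipaddress

def eui64_interface_id(mac):
    """48-bit MAC -> 64-bit interface ID (modified EUI-64)."""
    octets = bytearray(int(x, 16) for x in mac.split(":"))
    octets[0] ^= 0x02                                 # flip the universal/local bit
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    return int.from_bytes(eui64, "big")

prefix = ipaddress.IPv6Network("2001:db8:1:2::/64")   # documentation prefix
iid = eui64_interface_id("00:1b:21:aa:bb:cc")         # hypothetical MAC
print(ipaddress.IPv6Address(int(prefix.network_address) | iid))
# 2001:db8:1:2:21b:21ff:feaa:bbcc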

Herv
Mar 24, 2005

Soiled Meat
Here is the answer to the problem: compute the exponential expression 2 to the 64th.

2 to the 64th = 18446744073709551616, that is to say 18,446,744,073,709,551,616, which is read in English as:

"eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred sixteen."
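
If you'd rather not spell them out, here are the address-space sizes being argued about, printed with digit grouping (Python):

code:
# The address-space sizes under discussion, with thousands separators.
for bits in (32, 48, 64, 128):
    print(f"2**{bits:<3} = {2**bits:,}")
# 2**32  = 4,294,967,296
# 2**48  = 281,474,976,710,656
# 2**64  = 18,446,744,073,709,551,616
# 2**128 = 340,282,366,920,938,463,463,374,607,431,768,211,456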


e: Oh yah, and a loopback address that doesn't consume an entire 'class A' net.

Herv fucked around with this message at 16:22 on Aug 19, 2013

wolrah
May 8, 2006
what?

SamDabbers posted:

Why would it be better to cap it at 64 or 48 bits instead of 128? The whole point was to make it such a large address space that we'd never have to deal with the address exhaustion problem again, at least on this planet. Also, it allows for some cool features like mapping a 48-bit MAC address into the 64-bit host portion for auto-addressing within a subnet. You couldn't do that as easily with a 64 or 48 bit address.

There are two main reasons for wanting a shorter address space.

The first is entirely human. Most of us who deal with a lot of IPs end up remembering a bunch of them. An IPv4 address is pretty easy in that way, it's only slightly more complicated than a phone number and nicely grouped. This also helps in communicating them via speech, be it in person, over the phone, etc. IPv6 pretty much makes DNS a requirement for meatbag-friendly operation.

I mean just think about 10.0.0.15 vs. fd00:0000:0000::15. IPv6 addresses don't really get simpler than that and can be a lot more complex. Technically, since those two sets are all zeros they can also be compacted into the ::, but officially the fd00::/8 range is supposed to have all those zeros as 40 random bits, so you'd legitimately need to remember at least the amount I listed in most cases.
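
The compaction rules are easy to poke at with Python's standard ipaddress module, for what it's worth (same fd00 example; a real ULA prefix would carry 40 random bits after the fd):

code:
import ipaddress

addr = ipaddress.IPv6Address("fd00:0000:0000:0000:0000:0000:0000:0015")
print(addr.compressed)                      # fd00::15
print(addr.exploded)                        # fd00:0000:0000:0000:0000:0000:0000:0015
print(ipaddress.IPv4Address("10.0.0.15"))   # the IPv4 address it's being compared to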

That said, it's not like being forced to do DNS right is truly a bad thing, but to those who have been happily doing without for years it can be a challenge to deal with.


The second is routers. Big routers use specialized chips to do things as fast as they do. Those chips only have so much memory, and in some cases the kind of memory best suited for the job is expensive. More address space = more routes to deal with = greater memory requirements = greater cost. Multiply that by the scale of large providers and it's not hard to see why so many have waited until the absolute last minute to do anything.


edit: This should not be taken as me arguing against IPv6 or it being 128 bits; I really don't care what the answer is as long as the end result gets NAT the gently caress out of my life. Since IPv6 is currently the only viable option for that, even with its arguable "flaws" I have to support it.

edit2: Now that I'm not at a customer site that blocks Wiki, here's a link for anyone interested in the expensive memory in big routers/switches: http://en.wikipedia.org/wiki/Content-addressable_memory

Basically it's memory that the CPU can ask "is this value in here?" and get an answer in one step, rather than having to search the entire thing looking for it.
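
In software terms it's loosely the difference between these two lookups (a toy Python analogy, not how the silicon actually works):

code:
# Toy analogy: a CAM answers "which entry holds this value?" in one operation,
# where plain RAM has to be walked entry by entry.
routes = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

def linear_lookup(table, value):            # RAM-style: scan the whole table
    for i, entry in enumerate(table):
        if entry == value:
            return i
    return None

cam = {entry: i for i, entry in enumerate(routes)}   # CAM-style: the value is the key

print(linear_lookup(routes, "192.168.0.0/16"))   # 2, found by scanning
print(cam.get("192.168.0.0/16"))                 # 2, found in one lookup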

Herv posted:

Yep, even if we have 6 octets in an 'IP v7 address' the first 5 would stay rather static, with a padded out mask.


10.10.10.10.10.200 (Host)

255.255.255.255.255.0 (Mask)

10.10.10.10.10.254 (GW)

That gives us a measly 281 trillion addresses (2 to the 48th) to squeeze by with until we develop whatever the gently caress we use in 50 years. :)

That's thinking about it in a rather narrow way. Obviously private addresses will stay rather boring in general, but I only used those for the direct comparison with as simple an address as they get. The internal-only addresses I used really don't have much of a point for most users in IPv6: link-local is already there for local communication, and it's so easy to get more IPv6 space than you'd ever need that any actually managed IPv6 network should usually be using the real stuff unless it will absolutely never ever need internet connectivity.

For a more real-world comparison, how about google.com, which returns 74.125.225.67 (among others) for IPv4 but 2607:f8b0:4009:803::1000 for IPv6. Neither are "padded out", but the IPv6 one is a lot less human-friendly.

Regarding the number of addresses, keep in mind that a large chunk of IPv4 space is "wasted" due to the logistics of routing traffic in logical ways without blowing up routing tables (see TCAM problem above). IPv6 will be the same way, so having more addresses than we could ever need multiple times over allows the space to be divided in logical ways while still practically guaranteeing that we won't find ourselves 15 years down the road with some new space crunch requiring us to start the whole process over again. Since memory chips and memory buses tend to have power-of-two sizes/widths, 64 and 128 are the two logical steps up from 32 bits.

Why not 64 bit then? My best guess is something to do with wanting to be able to use the 48 bit MAC address as a supposedly unique number for automatic addressing schemes, but that's just a thought off the top of my head and not based in any actual knowledge.

wolrah fucked around with this message at 21:49 on Aug 19, 2013

Herv
Mar 24, 2005

Soiled Meat
Yep, even if we have 6 octets in an 'IP v7 address' the first 5 would stay rather static, with a padded out mask.


10.10.10.10.10.200 (Host)

255.255.255.255.255.0 (Mask)

10.10.10.10.10.254 (GW)

That gives us a measly 281 trillion addresses (2 to the 48th) to squeeze by with until we develop whatever the gently caress we use in 50 years. :)


Still standing by my 48 bit proposal.

I read DNS and BIND in the 90's but still... gently caress 128 bits in its ear for a technical solution.

I want IPv7 with a 48 bit address field.

psydude
Apr 1, 2008

Are any of you running MetroE through AT&T in the DC area? I'm trying to get a general idea of pricing, but their website is very coy and I'm not allowed to talk to sales because I'm a contractor.

e: We already have an ingress, which I know makes a big difference.

psydude fucked around with this message at 19:29 on Aug 19, 2013

H.R. Paperstacks
May 1, 2006

This is America
My president is black
and my Lambo is blue

psydude posted:

I'm not allowed to talk to sales because I'm a contractor.

What?!@! DoD would never buy anything if I wasn't allowed to talk pricing with vendors.

ruro
Apr 30, 2003

wolrah posted:

There are two main reasons for wanting a shorter address space.

The first is entirely human. Most of us who deal with a lot of IPs end up remembering a bunch of them. An IPv4 address is pretty easy in that way, it's only slightly more complicated than a phone number and nicely grouped. This also helps in communicating them via speech, be it in person, over the phone, etc. IPv6 pretty much makes DNS a requirement for meatbag-friendly operation.
Either you have a great memory for numbers or I have a terrible one. I can generally remember down to region and perhaps site if it's an important one :(. The internal draft IPv6 addressing standard they have where I work at the moment is easier for me to remember than the IPv4 standard as there are sufficient 'fields' to use for pertinent information, e.g.: <routing prefix>:site/cust id:building:level:level-net:host or <routing prefix>:dc num:cust id:cust-net:device-type:host.
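
As a sketch of how fields like that pack into an address (purely hypothetical field widths and prefix, not the actual standard described above):

code:
import ipaddress

# Hypothetical plan: <48-bit routing prefix> : site : building/level : net :: host
PREFIX = int(ipaddress.IPv6Address("2001:db8:100::")) >> 80   # top 48 bits

def build_addr(site, building_level, net, host):
    value = (PREFIX << 80) | (site << 64) | (building_level << 48) | (net << 32) | host
    return ipaddress.IPv6Address(value)

print(build_addr(site=0x12, building_level=0x0403, net=0x1, host=0x64))
# 2001:db8:100:12:403:1:0:64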

TCAM/CAM utilisation may be an issue for service providers, but I'd be surprised if that was an issue for anybody else.

Herv
Mar 24, 2005

Soiled Meat

wolrah posted:


edit: This should not be taken as me arguing against IPv6 or it being 128 bits, I really don't care what the answer is as long the end result gets NAT the gently caress out of my life. Since IPv6 is currently the only viable option for that, even with its arguable "flaws" I have to support it.

Believe it or not, we used to not NAT.

I was using a 10 net for the ubiquity.

What's probably clouding my judgement is that I have been at this for about 20 years, so I was a first-hand witness to the bitness getting over-allocated. I wouldn't call myself a Network Engineer these days. I'm just an old fart IT Director that used to be a Network Engineer.

The first time I was asked to do NAT was in '96 or so, on a 'PIX Classic' that was already in production. It was firewalling (ok, ok, packet filtering) but not NAT'ing. There were publicly routable addresses to the desktops, and this was for a MS Gold Partner.

It was the same at Bell Labs in the 90's too (6500 folks, one huge facility, PC and a SPARC on each desktop... where I cut my teeth thank god), routable addresses, but not NAT'd.

It seems like everything has been crafted around address exhaustion since the late 90's and for good reason.

OK, all bitness aside, how efficiently do you see the split between addressing skeleton and usable IP addresses playing out? 50 percent lost to network addressing versus host addresses?

Take my 48 and 64 bit fields and split them up however inefficiently you want to get, that's still some BIG math waiting for us to fill up. Keep routable addressing, but lose the NAT. Roughly 140 trillion for 48 bits, 9 quintillion for 64 bits.

Every house gets a fuggin Class A net (I know classes are deprecated looong ago)

Having said that, the issue of all the bolt-ons has to be addressed (poo poo), but once again, RFC 1918 was based on exhaustion, no?

e: Added the cut in half metrics for 48 and 64 bits.

Herv fucked around with this message at 23:16 on Aug 19, 2013
