phosdex
Dec 16, 2005

I'm still running truenas core. Def past the point of any janitoring needed. I forgot what scale was so I looked at the website and I don't need any of that.


Korean Boomhauer
Sep 4, 2008

rufius posted:

Can you elaborate? I’m genuinely curious.

I’m running a bunch of the *arrs, nzbget, a couple of custom docker apps, and I think handbrake. I haven’t run into issues.

But I see people being pissed about truecharts or whatever regularly.

I could have been a bit more concise hah. The truenas apps side is mostly fine; it's weird that it's Kubernetes-based but doesn't have many of the Kubernetes features, but it's fine. Most of my beef is from the truecharts side.

bit of an unfocused rant but:

When Scale first introduced apps, people kinda felt like that whole thing was lacking. A few people put their heads together and came up with their own ecosystem in order to make it easier to add way more apps and bring in a few more features. That's TrueCharts. They seem to like breaking the whole thing once or twice a year for seemingly no reason, and sometimes the fixes or changes the end users have to do aren't reliable or don't work and you have to reinstall a bunch of your stuff. You have to check their discord server daily to see if they've broken anything and have to make changes, or else things go downhill quickly. With many apps, you don't have access to make changes under the hood, or change some settings with the app. The maintainer and support staff are extremely hostile to anyone asking for help or questions. The whole thing is stressful. I myself haven't run into either, but sitting in their channel and watching someone post "I updated nextcloud and it won't start, and I don't wanna lose any data" and get told by the maintainer "sucks to be you" gets very draining very fast. You can't just google any issues either, because TrueCharts is more or less proprietary. You can't shim something for docker or kubernetes into TrueCharts because it'll most likely cause your app to explode, and the support people in their discord call you a moron for using google (while telling someone else "have you ever heard of a little thing called GOOGLE?!?").

I'm in the process of putting all of my poo poo into docker in a VM inside of proxmox and eventually moving my Scale install into a VM as well.

phosdex posted:

I'm still running truenas core. Def past the point of any janitoring needed. I forgot what scale was so I looked at the website and I don't need any of that.

My understanding is that eventually they're going to want everyone on Scale, and that Core is in maintenance mode, but I didn't do any digging to confirm.

Korean Boomhauer fucked around with this message at 15:36 on Apr 25, 2024

power crystals
Jun 6, 2007

Who wants a belly rub??

Korean Boomhauer posted:

When Scale first introduced apps, people kinda felt like that whole thing was lacking. A few people put their heads together and came up with their own ecosystem in order to make it easier to add way more apps and bring in a few more features. That's TrueCharts. They seem to like breaking the whole thing once or twice a year for seemingly no reason, and sometimes the fixes or changes the end users have to do aren't reliable or don't work and you have to reinstall a bunch of your stuff. You have to check their discord server daily to see if they've broken anything and have to make changes, or else things go downhill quickly. With many apps, you don't have access to make changes under the hood, or change some settings with the app. The maintainer and support staff are extremely hostile to anyone asking for help or questions. The whole thing is stressful. I myself haven't run into either, but sitting in their channel and watching someone post "I updated nextcloud and it won't start, and I don't wanna lose any data" and get told by the maintainer "sucks to be you" gets very draining very fast. You can't just google any issues either, because TrueCharts is more or less proprietary. You can't shim something for docker or kubernetes into TrueCharts because it'll most likely cause your app to explode, and the support people in their discord call you a moron for using google (while telling someone else "have you ever heard of a little thing called GOOGLE?!?").

I'm in the process of putting all of my poo poo into docker in a VM inside of proxmox and eventually moving my Scale install into a VM as well.

I'll echo this. I switched to scale two years ago (from windows storage spaces :v:) and truecharts seemed like a great system for adding whatever apps to the system because gently caress dealing with docker et al manually and there's like literally 10 things that ix themselves maintain. What I got instead was two different rounds of "we broke everything and your only option is to reinstall". The first one of those was incredibly stupid, where if you used the default "PVC (simple)" storage that just went away and you had to swap to just "PVC", but seemingly the only change was that that one had a quota for disk usage and simple didn't. Great. Except that you couldn't set a smaller quota than the previous at any point, and since simple was set to "unlimited", every option was smaller. So there was no way to migrate. After that one I decided I wouldn't use PVC, it was easier to just stick everything in host path storage despite that being very much what the maintainers don't want you doing.

The second one was something about a GPU selection dropdown that didn't have data in existing apps, so you had to reinstall them? That one pissed me off enough my solution was "just don't bother upgrading" for like a year. I assume I missed at least one more instance of this happening in the meantime.

And as above if you even think about asking for support, just look in a mirror and shout "go gently caress yourself". It'll be faster and an equivalent experience. Which is great when all the upgrade/install failed errors have zero attempt to explain what actually happened, you just get some raw python error that means nothing unless you're an actual maintainer.

The "discovery" UI in the 2023 release is also an absolute disaster, where if you search for a term you get five results sorted by who knows what, and a "view all" link that isn't all results, just, all. Everything in that category. The app management UI also has this amazing feature where it caches data in the client, so every time you navigate there you have to refresh to see the actual status of the apps. This leads to the amazing situation where you get a notification that there's an update, go to the apps page, and everything says up to date until you press F5. I have no idea who decided any of that was a good idea, it worked fantastically in the previous versions. I haven't tried upgrading to dragonfin or whatever the 2024 version is called yet, maybe they made that less awful.

I like the NAS part of truenas quite a bit but the apps ecosystem is a tire fire.

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.
That definitely resembles my experience early on with Scale. But, at least for me, the last year has smoothed out a lot.

Maybe I’ll jinx it saying that.

Korean Boomhauer
Sep 4, 2008
You'll be fine if you're using just the apps IX offers (and whatever you set up with their custom app stuff).

power crystals posted:

I haven't tried upgrading to dragonfin or whatever the 2024 version is called yet

if you're still on truesharts, there's a whole lot of poo poo you have to do to make this upgrade work without blowing up all your apps. I still ended up losing around five apps or so. Their documentation around this is incomplete as gently caress. The entire reason this blew up is their insistence on using PVC for everything. I didn't end up actually losing data because I've been down this path with them before and I made backups of everything, so my loss is just time setting up all these apps again (in docker in a vm instead.)

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The issue I have with Unraid is the inability to modify the base system. I could live with having to redo it every update, but it doesn't even let me. I have a bash script that does that for TrueNAS. I essentially want NVMe-oF for high-performance block I/O over the network. And I can't have that on Unraid. There's this obnoxious plugin system, but gently caress that one. If TrueCharts/k8s charts is poo poo, so is this. That said, it also doesn't even have native iSCSI support. Some NAS software, this Unraid.

Combat Pretzel fucked around with this message at 17:44 on Apr 25, 2024

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Combat Pretzel posted:

I essentially want NVMe-oF for high performance block I/O over the network. And I can't have that on Unraid.

This is pretty drat niche in my opinion, given that the primary use case for both is still spinning disk for big cheap bulk storage.

Real world with cheap-ish flash, how much lower latency is NVMeoF vs iSCSI? How fast is your networking?!

Korean Boomhauer
Sep 4, 2008
I can't speak to unraid, or your specific usecase, but TrueNAS is fine if you essentially use it like a pure NAS product and ignore the (third party) apps/kubernetes/VM stuff that's stitched onto it. I'm not sure truecharts would have anything approximating NVMe-oF, truecharts is more along the lines of wanting to run other *arr apps or like a minecraft server.

E: I kinda passively googled around and it sounds like this shooould be achievable in proxmox. ymmv and you might wanna pull some threads yourself to see if it fits your specific knees, but its an option.

Korean Boomhauer fucked around with this message at 18:22 on Apr 25, 2024

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Twerk from Home posted:

This is pretty drat niche in my opinion, given that the primary usage case for both is still spinning disk for big cheap bulk storage.

Real world with cheap-ish flash, how much lower latency is NVMeoF vs iSCSI? How fast is your networking?!
I have disks in a mirror for pseudo "dual actuator", backed by a 512GB L2ARC on a Gen3 NVMe drive and 52GB of ARC. If the caches are hot, I get SSD-like performance over the network (playing games stored on the NAS).

I have 40GbE Mellanox cards in my desktop and my NAS. I'm specifically using NVMe-oF because that's the only way to get RDMA, thanks to the Starwind initiator on Windows and the nvmet+nvmet-rdma kernel modules on Linux.

As far as latency, I don't have hard numbers. I just know that the last time I benchmarked it with DiskMark, in the worst metric, i.e. single-threaded random 4KB reads, iSCSI crapped out at 34MB/s, whereas it gets up to 90MB/s using NVMe-oF. That says enough. Forgot what 32-thread sequential was at, but it was beyond 3GB/s.

Either way, if Unraid would just let me gently caress around with the base system, that'd be fine. Then I could replicate it.
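For reference, the nvmet+nvmet-rdma export described above is driven through configfs; a rough sketch (the NQN, zvol path, and addresses here are made up, and you need an RDMA-capable NIC for the rdma transport):

```
# load the target modules
modprobe nvmet nvmet-rdma

cd /sys/kernel/config/nvmet

# create a subsystem; allowing any host is fine on a trusted LAN
mkdir subsystems/nqn.2024-04.lan.nas:games
echo 1 > subsystems/nqn.2024-04.lan.nas:games/attr_allow_any_host

# back it with a block device (a zvol, partition, or whole disk)
mkdir subsystems/nqn.2024-04.lan.nas:games/namespaces/1
echo /dev/zvol/tank/games > subsystems/nqn.2024-04.lan.nas:games/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-04.lan.nas:games/namespaces/1/enable

# expose it on an RDMA port
mkdir ports/1
echo rdma     > ports/1/addr_trtype
echo ipv4     > ports/1/addr_adrfam
echo 10.0.0.2 > ports/1/addr_traddr
echo 4420     > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-04.lan.nas:games ports/1/subsystems/nqn.2024-04.lan.nas:games
```

None of this survives a reboot on its own, which is why a bash script that replays it (as mentioned above for TrueNAS) is the usual approach.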

Inept
Jul 8, 2003

Any opinions on a decent 4 bay DAS? It seems like they all have compromises. I don't need particularly fast speed, I just don't want the thing to overheat or the controller to die within a year.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

withoutclass posted:

pkg works great tbh. pkg install Plex. Boom now you got Plex.

I keep Plex installed as a backup but gave up on the rest of their apps. What a weird collection of apps they have.

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


Folks who play games off of a NAS drive, how's that working for you? Is there a particular hardware or network minimum that needs to be met to notice a difference, or anything that does on the gaming computer side?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
You definitely need high performance for the streaming-assets type of games; see my earlier post for what that implies. Level-loading games will take somewhat longer to load on 1/2.5Gbit Ethernet, more so if you solely rely on spinning rust (forget streaming assets in that scenario, at least for more recent games).

evil_bunnY
Apr 2, 2003

Nolgthorn posted:

It looks like I made a little bit of a mistake this is the product I was actually looking at. Which wipes out a lot of the cost savings I was talking about. It's the one that has expandable memory and 4 extra nvme slots and so on. It's the one all the reviewers focus on.
4xNVMe but no 10GBE is some special blend of stupid.

Mr Shiny Pants
Nov 12, 2012

Corb3t posted:

This thread is almost 50/50 TrueNAS and Unraid users at this point with some Synology users peppered in - Definitely give Unraid a try.

I just use a default Ubuntu server installation with ZFS and Docker..... Works a treat.

BlankSystemDaemon
Mar 13, 2009



I just use plain FreeBSD with zfs, bhyve, and jails - it works great, because it isn't limited to what the appliance makers have decided the appliance should be capable of, and equally importantly what they've decided they won't support.

Computer viking
May 30, 2011
Now with less breakage.

evil_bunnY posted:

4xNVMe but no 10GBE is some special blend of stupid.

Being able to serve a solid 1gbit with a home-friendly number of spinning drives is surprisingly hard. Long reads or writes, sure, but anything with smaller (or heaven forbid, mixed) reads or writes can be "cheap USB stick" levels of slow.

Of course it would absolutely not hurt to have a 10gbit port - or even a 2.5 - but nvme-only 1 gbit will still feel a lot faster than spinning-disk-only 1 gbit for certain loads.

Computer viking fucked around with this message at 17:00 on Apr 26, 2024

Thanks Ants
May 21, 2004

#essereFerrari


Doesn't that Celeron only have 8 PCIe lanes in total though

MadFriarAvelyn
Sep 25, 2007

I almost wonder if going with Ubuntu would be the way to go for my build. It's the Linux distro I'm most familiar with and looking up some guides setting up ZFS doesn't sound too terrible to do.

I know when I set up my pool I'll probably want to use one or more of my four drives as parity drives just in case a disk fails. So that means...RAID-Z or RAID-Z2? Any preferences between them?

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
Wondering if Synology is going to update their DS423+ to perhaps have more than 1Gbit networking, or come with more RAM, for example? From what I can tell they're well overdue for a keynote thing.

Aware
Nov 18, 2003
I think if you've not done it before it's a great learning exercise to build from scratch, but at the end of the day I use most of this poo poo for work and don't want to janitor at home; I'm happy with a functional UI.

Computer viking
May 30, 2011
Now with less breakage.

MadFriarAvelyn posted:

I almost wonder if going with Ubuntu would be the way to go for my build. It's the Linux distro I'm most familiar with and looking up some guides setting up ZFS doesn't sound too terrible to do.

I know when I set up my pool I'll probably want to use one or more of my four drives as parity drives just in case a disk fails. So that means...RAID-Z or RAID-Z2? Any preferences between them?

The ZFS Raid-Z levels don't do dedicated parity drives, they spread the data and parity blocks evenly across all the drives - otherwise the parity drive(s) would see more traffic than the rest. You can think of it as writing data in groups of as many blocks as there are drives, and the number is how many of those are parity. So for a six-drive RAID-Z2, it would be writing groups of four data blocks and two parity blocks. (RAID-Z is RAID-Z1.)

In practice with your four disks, this means that RAID-Z1 would be 3+1, and RAID-Z2 would be 2+2. You could get the same 50% available space with two mirrors, though it would have different tradeoffs - it would probably be a bit faster, it's quicker to rebuild, but losing two disks could take out the pool if they're in the same mirror. (If you have multiple vdevs, writes are balanced across them based on their free space percentage - so ideally two mirrors would be similar to a RAID10.)

I'd probably do Z1.

Computer viking fucked around with this message at 23:49 on Apr 26, 2024
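To make those layouts concrete, here's what the three options look like at the command line (pool and device names are placeholders; use your actual disks):

```
# RAID-Z1: ~3 disks of usable space, survives any single disk failure
zpool create tank raidz1 ada0 ada1 ada2 ada3

# RAID-Z2: ~2 disks of usable space, survives any two disk failures
zpool create tank raidz2 ada0 ada1 ada2 ada3

# two mirrors ("RAID10-ish"): ~2 disks of space, faster and quicker to
# resilver, but losing both disks of one mirror loses the pool
zpool create tank mirror ada0 ada1 mirror ada2 ada3
```

Note the layout is permanent for raidz vdevs; pick before you load data.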

movax
Aug 30, 2008

BlankSystemDaemon posted:

For special and log vdevs, you're better off sticking to mirroring, since raidz1 adds padding and is way more affected by fragmentation if you ever get to the point of having low amounts of contiguous LBAs of free space.
You can do N-way mirroring, though; for example, instead of mirroring across 2 disks, you can mirror across more than two; that way, you can lose two disks and still not lose the pool.

Also, it's worth mentioning that SAS can be daisy-chained up to 5 SAS enclosures per external SAS port - but unfortunately this is only available for real SAS enclosures.
For DIY, the best you can do is find a way to supply power to a SAS expander in the UNAS case, then buy SAS expanders to reduce the number of cables going from/to each device.

Minor nit: The ZFS Intent Log is part of the on-disk specification - when you add a slog device, you're really adding a separate log device to be used instead of the ZIL.

Thanks -- I may look at adding just one drive then so the redundancy is matched, cheaper that way as well vs buying two more SATA SSDs.

On SAS, I just got more SATA Drives so I'm stuck w/ SATA... should be fine, I imagine, and with not populating a mobo / CPU / etc in the case I expect a lot more room to work + airflow. Will have to see how good / quiet the stock fans are.

How do I go about sizing the special vdev, or I guess, 'reverse' sizing it -- I'm not worried about that storage for just metadata, but if I set that recordsize/whatever parameter to say 10M, then every file smaller than 10M will actually live on the SSD mirror, right? Until it fills up... and then do I gracefully degrade to having metadata move to HDDs, and I have to basically do a copy off/back on to fix the issue?

movax fucked around with this message at 23:10 on Apr 26, 2024
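For what it's worth, the "recordsize/whatever parameter" movax is reaching for is special_small_blocks; a sketch of the setup (device names and the 64K threshold are illustrative):

```
# mirrored special vdev, matching the pool's redundancy
zpool add tank special mirror sda sdb

# per-dataset: blocks at or below this size (plus all metadata) land on the SSDs
zfs set special_small_blocks=64K tank/stuff
```

On the fill-up question: if the special vdev runs out of space, new allocations spill back to the HDDs rather than failing, so nothing breaks; but data already written to the wrong class only moves if it gets rewritten, e.g. by copying it off and back on.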

BlankSystemDaemon
Mar 13, 2009



movax posted:

Thanks -- I may look at adding just one drive then so the redundancy is matched, cheaper that way as well vs buying two more SATA SSDs.

On SAS, I just got more SATA Drives so I'm stuck w/ SATA... should be fine, I imagine, and with not populating a mobo / CPU / etc in the case I expect a lot more room to work + airflow. Will have to see how good / quiet the stock fans are.

How do I go about sizing the special vdev, or I guess, 'reverse' sizing it -- I'm not worried about that storage for just metadata, but if I set that recordsize/whatever parameter to say 10M, then every file smaller than 10M will actually live on the SSD mirror, right? Until it fills up... and then do I gracefully degrade to having metadata move to HDDs, and I have to basically do a copy off/back on to fix the issue?
SAS HBAs support either the native SATA protocol or Serial ATA Tunneling Protocol if you're using SAS expanders.
So it really shouldn't be an issue, so long as you avoid some of the EMC SANs of the past, where they'd actually lock out those features until the customer paid for them.

EDIT: The biggest real difference with SAS is that you get access to the SCSI READ DEFECT DATA command, which is what we all wish S.M.A.R.T was.

BlankSystemDaemon fucked around with this message at 23:19 on Apr 26, 2024

movax
Aug 30, 2008

BlankSystemDaemon posted:

SAS HBAs support either the native SATA protocol or Serial ATA Tunneling Protocol if you're using SAS expanders.
So it really shouldn't be an issue, so long as you avoid some of the EMC SANs of the past, where they'd actually lock out those features until the customer paid for them.

EDIT: The biggest real difference with SAS is that you get access to the SCSI READ DEFECT DATA command, which is what we all wish S.M.A.R.T was.

No SAS expanders here -- just a basic SATA backplane in each NSC-810A I will cable to the LSI HBA, so not too worried about it. We will see how the refurb drives from serverpartdeals do...

Incidentally, there's no electrical isolation... so I take it AC coupling on the SATA lines is sufficient for these external applications?

There are no more 5400 PROs on eBay so I got a PM883. With the 1:10 rule of thumb for ARC sizing, and having 64 GB of RAM, I will plan to provision the 1TB M.2 I have in the PCIe x1 slot as L2ARC of ~600GB in size or so. I'll do some more reading up on optimizing the special vdev configuration...
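The L2ARC attach itself is quick once the partition exists; roughly (device names made up, sizes per the plan above):

```
# carve a ~600GB partition out of the 1TB M.2, then attach it as cache
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart l2arc 1MiB 600GiB
zpool add tank cache /dev/nvme0n1p1
```

Cache vdevs carry no pool data, so unlike the special vdev they can be removed or replaced at any time with `zpool remove`.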

Computer viking
May 30, 2011
Now with less breakage.

I may have spent a lot of time today trying to figure out why an old rack server (a Dell R415) wasn't discovering any of the disks I put in it.

Apparently the backplane connector on the motherboard is just passthrough to the RAID controller card, and I've long since repurposed that. Oops. The onboard SATA controller drives ... SATA ports on the motherboard of the 1U rack server with a hotplug backplane and no SATA power cables? Weird.

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
https://nas.ugreen.com/pages/ugreen-nas-storage-preheat

Ugreen (the usb hub and charging cable company) is making a foray into the NAS space. I wouldn't consider this an ad because it's all nerds in here who probably know exactly why these suck.

They appear marketed toward average consumers particularly Apple users. I am an average consumer albeit not an Apple one, so I have no idea what I'm looking at. There isn't much information and I'm concerned I wouldn't get Jellyfin running on it and who knows what else.

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.
Docs say it supports Docker so it’s some sort of Linux thing.

That’s all most folks would need to get their favorite poo poo running.

I found this video about the OS, and it seems like there’s some notion of an app model but details are thin: https://youtu.be/YaLPl41LByo?si=YGZ1AkzY7YFmGFrg

TheDK
Jun 5, 2009
Synology user here. I setup babbys first docker container last night for pi-hole which was both surprisingly easy and immediately helpful!

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


Nolgthorn posted:

https://nas.ugreen.com/pages/ugreen-nas-storage-preheat

Ugreen (the usb hub and charging cable company) is making a foray into the NAS space. I wouldn't consider this an ad because it's all nerds in here who probably know exactly why these suck.

They appear marketed toward average consumers particularly Apple users. I am an average consumer albeit not an Apple one, so I have no idea what I'm looking at. There isn't much information and I'm concerned I wouldn't get Jellyfin running on it and who knows what else.

People have been getting preview units and reviewing these. Seems like the hardware is fine, but the software needs work.

BlankSystemDaemon
Mar 13, 2009



Hardware for a NAS is easy to get right. Buy a low-power Intel i3 or AMD APU with proper ECC support, throw in some memory, and make sure you have enough SATA and PCIe ports/slots to fill the bays.

Software-wise, there are innumerable ways to gently caress up.

SA Forums Poster
Oct 13, 2018

You have to PAY to post on that forum?!?
A long long time ago (20 years) I was using FreeBSD as my desktop OS. I've been using nothing but Windows ever since. For my new NAS I was going to install TrueNAS, but how much more effort would it be to install FreeBSD and customize it myself? My day job is not computer touching.

Just need zfs, rtorrent, and plex

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
I think I'll go for it if I can get one soon enough. Need to wait for them to hit the market in Europe.

Aware
Nov 18, 2003

SA Forums Poster posted:

A long long time ago (20 years) I was using FreeBSD as my desktop OS. I've been using nothing but Windows ever since. For my new NAS I was going to install TrueNas, but how much more effort would it be to install FreeBSD and customize it myself? My day job is not computer touching.

Just need zfs, rtorrent, and plex

It's a perfectly good choice and not really any more difficult or easier than starting with a clean Linux install. Go for it I'd say.
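For that short list, the whole FreeBSD setup is a handful of commands; a sketch (disk and dataset names are placeholders, package names from memory, so check `pkg search` first):

```
# storage: four disks, single-parity, compressed dataset for media
zpool create tank raidz1 ada0 ada1 ada2 ada3
zfs create -o compression=lz4 tank/media

# apps straight from pkg, as mentioned up-thread
pkg install plexmediaserver rtorrent
sysrc plexmediaserver_enable="YES"
service plexmediaserver start
```

rtorrent runs per-user rather than as a service, so that one is just launched in tmux or similar.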

Korean Boomhauer
Sep 4, 2008
Dumb question but is there anything that truenas scale does zfs-wise that i couldn't replicate with a cronjob if i were to just use zfs on proxmox and have an nfs/smb container somewhere? Like what all would I have to set up to make sure my data doesn't get exploded, aside from a weekly scrub? I'm pretty sure I can view drive health right in proxmox as well.

I was originally going to do a truenas scale vm and passthrough one of the sata controllers and use that for NAS, but I wanna have some of my docker containers use some of the space on the NAS, like tubearchivist can store videos there, or romm can access roms on it (for 24 hours, after which they'll be deleted). I was reading that docker and nfs can be dicey, but nfsv4 was fine? I don't know!

Korean Boomhauer fucked around with this message at 22:42 on May 2, 2024
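On the docker-plus-NFS point: pinning the NFS version in the volume options is the usual way to avoid the dicey behaviour; roughly (server address and export path are made up):

```
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw,nfsvers=4.1 \
  --opt device=:/mnt/tank/media \
  nas-media

# any container can then mount it like a normal named volume
docker run --rm -v nas-media:/data alpine ls /data
```

The volume is mounted lazily on first container start, so a wrong address only shows up as an error at `docker run` time, not at volume creation.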

Yaoi Gagarin
Feb 20, 2014

aside from a weekly scrub you should set up timed snapshots. theres scripts floating around somewhere that handle all the logic like keep X daily snapshots and Y monthly snapshots etc
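The retention logic those scripts implement boils down to "sort, keep the newest N, destroy the rest", which is small enough to sketch inline (snapshot names are made up; with date-stamped names like these, lexical order is age order):

```shell
# pretend output of: zfs list -H -o name -t snapshot tank
snapshots='tank@auto-2024-05-01
tank@auto-2024-05-02
tank@auto-2024-05-03
tank@auto-2024-05-04'

# newest-first, then everything past the newest 3 is expired
expired=$(printf '%s\n' "$snapshots" | sort -r | tail -n +4)
printf '%s\n' "$expired"
```

A real script would feed each expired name to `zfs destroy`; this just prints `tank@auto-2024-05-01`, the one snapshot past the keep window.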

Korean Boomhauer
Sep 4, 2008
Oh yeah snapshots and backups are gonna be a part of my whole new setup lol. I'm currently not doing either! I'm not entirely sure how to weave snapshots and replication so it's not a certified mess. I have a lot of reading to do I guess.

Korean Boomhauer fucked around with this message at 22:48 on May 2, 2024

Gonna Send It
Jul 8, 2010
I like Sanoid/Syncoid for my simple home ZFS backup uses. I have it set up to pull from the primary server to the backup server via ssh juuuuust in case the primary is somehow compromised.

https://github.com/jimsalterjrs/sanoid
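For reference, a minimal sanoid.conf in the same spirit (dataset names and retention counts are just examples; the repo's README documents the real template options):

```
# /etc/sanoid/sanoid.conf on the primary
[tank/data]
        use_template = production
        recursive = yes

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

The pull itself is then one syncoid invocation from the backup box, along the lines of `syncoid root@primary:tank/data backuppool/data`, run from cron.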

yoloer420
May 19, 2006
I'm just sorting out the HBA for my new build.

Just quickly looking on eBay I identified two cards that are very affordable:



I plan to attach 8x 18TB SATA disks, a mix of WD and Seagate, which I'll run ZFS on with Ubuntu as a host OS.

Is one of these better than the other? Should I get something else?

Please help me avoid making mistakes :(


BlankSystemDaemon
Mar 13, 2009



Korean Boomhauer posted:

Dumb question but is there anything that truenas scale does zfs-wise that i couldn't replicate with a cronjob if i were to just use zfs on proxmox and have a nfs/smb container somewhere? Like what all would I have to setup to make sure my data doesn't get exploded aside from a weekly scrub? I'm pretty sure I can view drive health right in proxmox as well.

I was originally going to do a truenas scale vm and passthrough one of the sata controllers and use that for NAS, but I wanna have some of my docker containers use some of the space on the NAS, like tubearchivist can store videos there, or romm can access roms on it (for 24 hours, after which they'll be deleted). I was reading that docker and nfs can be dicey, but nfsv4 was fine? I don't know!
Anything that an appliance OS does can typically be done by a general OS, and the general OS can do much more - it's just that you have to spend time getting it to work, whereas the appliance is supposed to be simpler.
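As a sketch of that time investment, the scrub-plus-health-check part of the question above is about two cron lines (pool name and alert address are made up; each crontab entry has to stay on one line):

```
# root crontab: scrub Sunday night, mail if any pool is unhealthy
0 3 * * 0  /sbin/zpool scrub tank
0 8 * * *  /sbin/zpool status -x | grep -q 'all pools are healthy' || /sbin/zpool status | mail -s 'zpool problem' you@example.com
```

`zpool status -x` prints only unhealthy pools, which is what makes it cron-friendly; drive health on top of that is smartd or the hypervisor's own SMART view.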

Gonna Send It posted:

I like Sanoid/Syncoid for my simple home ZFS backup uses. I have it setup to pull from the primary server to the backup server via ssh juuuuust in case the primary is somehow compromised.

https://github.com/jimsalterjrs/sanoid
Yea, Jim Salter's sanoid is the go-to thing at this point, because it does everything right and doesn't try to gently caress with anything that isn't its own snapshots.
