Hadlock
Nov 9, 2004

We have a fledgling CI system for our Ruby on Rails product: Jenkins feeds TeamCity from GitHub, and we also have Bamboo building our product in containers and deploying it to GCE.

As near as I can tell, Bamboo development basically stopped in 2012, aside from some bug fixes in 2013, and Atlassian hasn't touched the drat thing in years except for periodic life-support patches.

Bamboo will build the Docker containers and deploy via shell scripts, but at that point it's just a glorified scheduler with deep hooks into GitHub. It also does some testing, but all the results need to be in JUnit XML format for it to do anything with them.

I've found an RSpec-to-JUnit conversion library, so... that should be possible.
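
If anyone's hunting for the same thing: the one I keep seeing referenced is the rspec_junit_formatter gem (assuming that's roughly what I found), and the wiring is about two lines:

    # Gemfile, test group - assuming the rspec_junit_formatter gem
    gem 'rspec_junit_formatter'

    # keep normal console output, and also write JUnit XML for Bamboo to pick up
    bundle exec rspec --format progress \
      --format RspecJunitFormatter --out tmp/rspec-results.xml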

I'm also responsible for testing the API, and for integration testing at the GUI (Selenium) level, before it goes out to the customer. I'm thinking of writing some shell scripts to test the API endpoints against some matching criteria (a sketch below), plus some Selenium Grid (dockerized Selenium endpoints) monstrosity.
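
Something like this is the level of "matching criteria" I mean for the API checks - a hedged sketch, the endpoint and expected body are made up:

    #!/usr/bin/env bash
    # hypothetical endpoint smoke test: check status code and a string in the body
    set -euo pipefail
    BASE_URL="${BASE_URL:-https://dev.example.com/api}"   # assumed env var

    status=$(curl -s -o /tmp/body.json -w '%{http_code}' "$BASE_URL/v1/health")
    [ "$status" -eq 200 ] || { echo "FAIL: got HTTP $status"; exit 1; }
    grep -q '"status":"ok"' /tmp/body.json || { echo "FAIL: unexpected body"; exit 1; }
    echo "PASS"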

Nobody seems to do Selenium very well, except, it looks like... Travis CI and maaaaybe Sauce Labs? I guess everyone else has their own home-grown widget factory.

So I guess this is sort of what I'm looking at:

Bamboo builds docker container
Bamboo deploys docker container to dev cluster
Bamboo runs testing shell scripts
-Integration/UI test
-API Test
-Performance Test
-Code coverage
-Static analysis

Then most of that goes into a series of JUnit XML files that get dumped back to Bamboo.
Data also gets piped into a data-analysis DB.

Profit? What is the hot new awesome-sauce everyone is using that's not Rainforest QA?

Hadlock
Nov 9, 2004

I've been selected to put our key monitoring stuff into a unified dashboard that's going to be powered by a Raspberry Pi 3.

Our current monitoring collectors are:

Nagios
Datadog
Munin
Graylog
Splunk

I'd love to feed all this crap into something like Prometheus and then report on it via Grafana, rotating through various dashboards, but that seems like a ton of work.

Option B, I guess, is to write up one or two dashboards per service, then have the Pi browse to each board? Seems super clunky though.
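
Though now that I type that out: Grafana has a built-in playlist feature that rotates through dashboards on a timer, so the Pi would only need a browser in kiosk mode pointed at one URL (the ?kiosk flag is a newer-Grafana thing, if I remember right). Rough sketch; host and playlist ID are made up:

    # on the Pi: launch a browser in kiosk mode at a Grafana playlist
    chromium-browser --kiosk --incognito \
      "http://grafana.example.com:3000/playlists/play/1?kiosk"

That only solves it for the stuff I can get into Grafana, of course.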

What's best practice here? This org is about a decade behind, so whatever is current is going to be a million times better than whatever they have now.

Googling for advice on this is useless; every dashboard company has SEO'd good modern discussion off the first couple of pages.

Hadlock
Nov 9, 2004

We'll see how long this lasts, but for the moment, after some good proof-of-concept work on containerizing two of our core products, my boss is giving me carte blanche to move his company's systems out of the dark ages of managed VMs and into the light of Kubernetes on AWS.

Trying to compile a "2018-era devops starter kit"; it looks something like this. What would you add/delete/modify?

Hosting: AWS
Orchestration: Kubernetes
Reverse proxy: Traefik
Monitoring/Alerting: Prometheus/Grafana
Log management: Graylog
Build system: Jenkins
Secret management: Vault
Source control: GitHub
Release management: Coreroller

Hadlock
Nov 9, 2004

the talent deficit posted:

if you're going to use aws i'd probably use ALB instead of traefik and cloudwatch logs -> es instead of graylog. you might want to replace jenkins with codebuild too

We want to build this in AWS but also run a similar, if not identical, setup in the environment hosting our managed VMs, so we're trying to avoid complete AWS lock-in and at least keep the appearance of vendor-neutral deployment. Being able to build a very small k8s cluster inside our existing infrastructure and prove that it works there makes it a lot easier to get buy-in from those who hold the purse strings to give us some additional spend to migrate to the cloud.

Punkbob posted:

Yeah I'd do nginx ingress controller instead of Traefik.

It's a cool project but not super well integrated into kubernetes. To really run it you need to expose a key value store for it to hold state, which nginx-ingress + kube-lego does for you.

Edit also use kops, unless you really want to roll by hand.

And if you aren't already in AWS I'd take a real hard look at GKE instead of managing it yourself.

Re: GKE, we did that at my last company and for the most part it worked out great, minus some small cock-ups. The plan is to do k8s on AWS via kops as proof of concept/MVP for some greenfield projects, and then when EKS becomes available switch over to that. I'm a big fan of kops so far.

What do you recommend instead of Traefik, and/or what is best practice here? Can you link me to something good? The problem with k8s is that it's almost too modular, and there are 1000 articles out there about how to use someone's pet project of the week as X module in k8s, instead of just "this is best practice in 2017/8".

Hadlock
Nov 9, 2004

Punkbob posted:

I'd go with ingress-nginx (but not the one by nginx, inc); it's a first-class project of kubernetes and has good support. I'd really go with this both on prem and in the cloud, as it's the most full-featured ingress controller at the moment.

The reason why I dislike traefik on k8s is that it isn't well integrated into k8s yet, and to run it in HA you'd need to stand up your own key value store (consul or etcd). Its config syntax also doesn't use the native ingress resource last time I checked, which is a bummer, and it had no context for kubernetes-specific resources like secrets (used to store tls keys) or configmaps.

ingress-nginx it is, then! Thanks!

Ideally we'd have something like jwilder/nginx-proxy, which I've been using casually at home for years now and love the simplicity of, but I guess this isn't a whole lot more complex.
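
For posterity, the amount of YAML per service with ingress-nginx is pretty small. A hedged sketch; host and service names are made up:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        kubernetes.io/ingress.class: nginx   # picked up by the ingress-nginx controller
    spec:
      rules:
      - host: myapp.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: myapp       # an existing Service, assumed
              servicePort: 80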

Hadlock
Nov 9, 2004

I talked my boss into Kubernetes, he sold his boss on it and got our CTO jumping out of his seat pointing at the screen in the board room; halp;

We're a 2003-era Java company; GitHub is like Jesus, Kubernetes is like having a conversation with God himself. Oh God, oh God.

Our app is a fancy Java CRUD app that scales horizontally very well.

Hadlock
Nov 9, 2004

Huh, we went from... no AWS, no Docker, no Kubernetes to... one Kubernetes cluster for ops and one Kubernetes cluster for our reporting team this month, and a third Kubernetes cluster in prod (really, a limited customer-facing late-stage alpha) in mid-February. :sweatdrop:

Right now we're using kops 1.8.x to manage/create the clusters; as my friend describes it, "a high leverage tool". Injecting a cluster into an existing VPC on its own subnet(s) seems to work, and we have some existing infrastructure-as-code Tectonic stuff that a contractor sort of maintains. I think my kops stuff makes him mildly irritated, but kops has an "Export as Terraform" option, so... I guess I'll just do that and then merge it into our Terraform codebase? Haven't figured that part out yet. I'd prefer to just spin these things up in their own VPC as god intended.

So: kops to deploy/maintain the cluster, nginx-ingress to handle reverse proxy, kube-lego for SSL. I've spun up Prometheus and Grafana but haven't had a chance to wire them up or anything. Will have to circle back to RBAC; right now everything is controlled from either my user or a helm/tiller user.
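
For the record, standing one of these up with kops is only a couple of commands. A sketch with placeholder names/zones; the --target=terraform flag is what drives the "Export as Terraform" path mentioned above:

    export KOPS_STATE_STORE=s3://my-kops-state-bucket   # assumed bucket
    kops create cluster \
      --name=dev.k8s.example.com \
      --zones=us-east-1a,us-east-1b \
      --node-count=5 --node-size=t2.medium
    kops update cluster dev.k8s.example.com --yes
    # or: kops update cluster dev.k8s.example.com --target=terraform --out=.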

Do we have a kubernetes thread yet? Or is this it?

Boss really wants to get us out of this managed-server hell and into AWS; we're using this as a beachhead to get there, so things are moving pretty fast.

Hadlock
Nov 9, 2004

Punkbob posted:

Edit: the team that uses kops does do the terraform export, but it's one of those things where I don't understand why they do it or fight so hard with it, besides just really liking terraform.

I've agreed to do the terraform export for my items; I don't like it a whole lot, but on the other hand, it's good to have your state documented in code somewhere that's readable by a third-party tool.

We applied for the EKS managed-Kubernetes AWS beta; haven't heard anything back from them yet. We have one guy using Fargate to vastly simplify our QA/Selenium stuff, but looking at the numbers it's like 2x the cost of running it in self-managed k8s. We're hoping EKS pricing isn't insane like Fargate's.

Punkbob posted:

I’d switch from kube-lego to cert-manager, it’s by the same folks but is a better spin on what kube-lego does and has features like using dns verification so you don’t have to expose everything to the world.

Thanks for the suggestion, I will definitely check it out. The reason we (I) went with kube-lego is that we (I) wanted something that would work, fast, and kube-lego is old enough that it has a pretty decent body of third-party documentation. cert-manager sounds like it may even be a candidate for going into the Kubernetes incubator, so there's a good chance it'll become the de facto solution. Right now I am really digging that to get green-lock TLS for any project, it's just two extra lines of code in the deployment.yaml.
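
For anyone copying this: the "two extra lines" are really an annotation plus a tls block, and in my setup they live in the ingress manifest rather than the deployment proper. Hedged sketch, names made up:

    metadata:
      annotations:
        kubernetes.io/tls-acme: "true"    # tells kube-lego to go get a cert
    spec:
      tls:
      - hosts:
        - myapp.example.com               # placeholder host
        secretName: myapp-tls             # kube-lego stores the cert in this Secret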

Hadlock
Nov 9, 2004

Yeah, the problem with Kubernetes is that it does most everything out of the box, and creating a vendor-specific variant a) locks you in and b) isn't as well supported.

Unless you're a bunch of data scientists incapable of cobbling together even the most basic k8s cluster using kops (going to check out Kubespray), I don't see the point of getting some weird proprietary k8s cluster variant. You get managed k8s for free on GKE, and AWS will have managed k8s by end of year. Locking yourself into a vendor-specific solution this early in the technology lifecycle seems peculiar, unless you're getting kickbacks from their sales team.

Hadlock
Nov 9, 2004

We're running < 5 x t2.medium + 1 x t2.2xlarge (8 vCPU + 32 GB RAM) for some ridiculous stateful data-cruncher app that I've massaged into a stateless-esque service, so long as you can deal with the fact that it takes 12 minutes to spin up and ingest 20 GB of data. Two clusters like that, one for dev and another for UAT. Prod will likely be slightly beefier, but not by much.

Probably by the end of the year we might have 10 x t2.medium, or 5ish nodes that are roughly double a t2.medium. Running fewer than 5 nodes gives me the heebie-jeebies. Our workflow isn't super dynamic, although once we onboard QA and their ridiculous Selenium array it might get more exciting.

Hadlock
Nov 9, 2004

Mao Zedong Thot posted:

We run ~20 production clusters with between 500gb and 3.5tb of ram, between 5-100 nodes. We're migrating most of our capacity towards hardware, so the cluster sizes will shrink not grow even as our workload increases. We don't have any issues with the 2 or 3 clusters >80 nodes but for a variety of reasons would prefer if they were smaller clusters of higher powered machines. All told we run something like 900 services on them :monocle:

Wow.

How long have you been doing k8s, and what's your cluster management solution? How many people do you have managing one cluster, on average?

Hadlock
Nov 9, 2004

Is there a good low-traffic mailing list for Kubernetes?

Hadlock
Nov 9, 2004

SeaborneClink posted:

Also I know this isn't the first time you've heard this. :smug:

Goddammit

Hadlock
Nov 9, 2004

Windows container stuff is kind of hard mode.

Try spinning up a Ghost blog container in a Linux VM first; figure out Docker volumes, container networking, and passing in env vars before attempting anything complicated in Windows land.
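
One docker run covers all three concepts - 2368 and the content path are the official image's defaults, if memory serves; the host side is arbitrary:

    # volume, port mapping, and env var config in one container
    docker run -d --name blog \
      -p 8080:2368 \
      -v /srv/ghost:/var/lib/ghost/content \
      -e url=http://localhost:8080 \
      ghost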

Hadlock
Nov 9, 2004

Is there a thing that will scrape my Prometheus endpoints in one secure zone and then push them to a Prometheus gateway on my centralized server in another secure zone, so that I'm only allowing a single connection on a single port between the two?

Hadlock
Nov 9, 2004

It sounds like Federation is what I need to do? Good link.
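
For anyone who lands here from Google: the gist of federation is a scrape job on the central Prometheus that pulls from the downstream server's /federate endpoint, so only one host:port has to be allowed through. Hedged sketch; the target and matcher are placeholders:

    scrape_configs:
      - job_name: 'federate'
        scrape_interval: 30s
        honor_labels: true
        metrics_path: '/federate'
        params:
          'match[]':
            - '{job=~".+"}'     # which series to pull across; placeholder matcher
        static_configs:
          - targets:
            - 'prometheus.zone-a.example.com:9090'   # the single allowed connection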

Hadlock
Nov 9, 2004

I would really like to see a clone of Jenkins not written in Java, plus Jenkinsfile support in a language other than Groovy.

We've used Bamboo and Jenkins primarily. And one guy used TeamCity, which was just dead reliable. But 95% of everything I've seen was running Jenkins.

Hadlock
Nov 9, 2004

That's only the second time I've ever heard of Octopus Deploy in nearly five years.

poemdexter posted:

I would love for Jenkins to support the full Groovy language and not sandbox poo poo in weird ways.

Yeah, next week I am diving into building out our CD system for a single-tenant version of our multi-tenant product, and I'm not looking forward to it. The build engineer at my last company just figured out how to execute bash from inside jenkins-groovy, and our Jenkinsfiles were just huge quoted text blocks of "run this series of bash scripts" (sketch below). I think Jenkins has at least a Ruby plugin; I would imagine there's Python support too via a plugin.
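
If you've never seen one of these Jenkinsfiles, they're basically Groovy as a thin wrapper around sh steps. A sketch; stage names and script paths are invented:

    node {
      stage('Build') {
        // all the real work happens in bash
        sh """
          ./scripts/build_container.sh
          ./scripts/push_container.sh
        """
      }
      stage('Deploy') {
        sh "./scripts/deploy.sh dev"
      }
    }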

Volguus posted:

Do you have a wishlist for such a system? The language it's written in I doubt matters very much, though support for scripts/instructions written in many languages would probably be a bonus.
If you can outline a set of needed features, I can guarantee that there are developers out there able and willing to implement said features, even if it would mean forking an existing system (jenkins....).

Are you offering to build me an enterprise-grade CI system for free...?

Hadlock
Nov 9, 2004

necrobobsledder posted:

Trying to get a rough idea of what’s expected stress / responsibilities compared to others that have broader experience than myself.

Is it normal for companies to hire “devops” engineers as a hero engineer that are expected to take completely garbage, stateful, poorly documented, unautomated legacy (5 - 15 years old) software and have exactly one engineer out of 8 - 30 engineers take over most of infrastructure ownership, deployments, release management, and deliver a CI/CD pipeline in less than half a year while being on-call? I’ve talked to dozens of companies (large, small, b2c, enterprise - the full gamut) in several non-tech hubs for years and all but 3 companies seem to want / need exactly this (in veiled or not so veiled intent) while paying maybe 20% more for said engineer(s). It’s getting super old being deployment dave when I spend 30% of my time documenting and making deployments push-button easy for others and getting stuck with marching orders like Dockerizing super stateful, brittle software intended to be pushed into a K8S cluster.

This is an SRE job description; welcome to my world. In theory, Google is full of SREs who work side by side with developers in complete harmony, but outside of the ivory tower it seems to be something like 1 SRE who architects/builds the system(s) described above to a ratio of 4 "devops" engineers who do most of the toil/micro-configuration of said system.

Hadlock
Nov 9, 2004

IAmKale posted:

Are there any good guides on best practices for capturing log output from containers? For the scale of what I’m supporting, it’d be great to get a robust local logging setup. I know at some point, though, I’ll need to look at services I can use to aggregate data. For now, though, I’m more interested in higher level fundamentals to gain more confidence in Docker.

Sidecar your logs to log management like ELK, GELF, Splunk, etc. Our legacy prod mission-critical stuff is in Splunk right now, but it costs a fortune; we hope to be 100% Graylog by end of quarter.

I haven't figured out the magic way to collect logs from Kubernetes yet. For stats monitoring, Prometheus is dead simple. Haven't seen a vendor-agnostic, zero-config log solution on par with Prometheus.

Hadlock
Nov 9, 2004

Docjowles posted:

Yeah this is what we do (self-managed cluster on AWS built with kops). Containers write to stdout/stderr, which kubernetes redirects to /var/log/containers/ on the node. There's a daemonset running fluentd on every node. It tails all the logs and sends them to elasticsearch. Not much to it.

Yeah, my last company was on GKE; log management was magical with, what is it, Logstash? Super easy, push-button, loved it.

Can you go into more detail about what you're doing that works with your kops implementation? Would love to hear more; that's what we're doing, but it's not coming together as smoothly as you're describing.
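
For context, the shape of what you're describing, as far as I can tell, is roughly this (the image and Elasticsearch host are assumptions on my part; the fluentd-kubernetes-daemonset images seem to be the usual starting point):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      namespace: kube-system
    spec:
      selector:
        matchLabels: {name: fluentd}
      template:
        metadata:
          labels: {name: fluentd}
        spec:
          containers:
          - name: fluentd
            image: fluent/fluentd-kubernetes-daemonset:elasticsearch  # assumed image
            env:
            - name: FLUENT_ELASTICSEARCH_HOST    # env var from that image's docs
              value: "elasticsearch.example.com"
            volumeMounts:
            - name: varlog
              mountPath: /var/log     # where the node keeps container logs
          volumes:
          - name: varlog
            hostPath: {path: /var/log}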

Hadlock
Nov 9, 2004

Favorite secret store system? Our Vault setup just rolled over and management doesn't trust it; also the guy who set it up didn't have any backups anywhere, so looking for something else.

Hadlock
Nov 9, 2004

Hadlock posted:

Favorite secret store system? Our Vault setup just rolled over and management doesn't trust it; also the guy who set it up didn't have any backups anywhere, so looking for something else.

Update: Consul 0.8.x apparently leaked 100 GB of disk over 300 days. The guy before me did not set up any kind of disk monitoring (or it got buried in the "notifications" noise - I'm not allowed to set up an actionable-alerts Slack channel, pick your battles, etc. etc.), and while Vault was writing to the lease KV the encrypted string got truncated and couldn't be decrypted. This is not well described or even alluded to in the error messages, and I fully expect my PR to be roundly ignored, but after deleting all the lease data, everything came back to life. Out of disk always fucks everything, but I expected the root key to at least be able to log in and do things. Que sera, sera.

Hadlock
Nov 9, 2004

IAmKale posted:

Hey, speaking of Docker, I'm using the official Nginx Docker image via Compose to host a really simple reverse proxy. Unfortunately I'm getting 502 errors, but when I run docker-compose logs nginx nothing gets output. All of the image's logging outputs are mapped to stdout and stderr, so I was expecting at least Nginx initialization logging. However, there's zero output of any kind from that command. Am I doing something wrong?

Edit: it turns out I had set values for error_log and access_log in my nginx.conf, which prevented the logs from showing up on stdout and stderr :suicide:

Check out the jwilder nginx reverse proxy container. Once you realize you just have to point DNS at the IP and add an env var like -e VIRTUAL_HOST=my.cool-domain.com to the docker run command, it takes care of everything else; it's just magic, zero config. Been using it for years and it's just bullet proof.
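
The whole setup, more or less - a hedged sketch, the domain and app image are placeholders:

    # the proxy, watching the docker socket for containers coming and going
    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy

    # any app container: set VIRTUAL_HOST and the proxy wires it up
    docker run -d -e VIRTUAL_HOST=my.cool-domain.com my-app-image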

Hadlock
Nov 9, 2004

I have a weird problem where I'm running containers on a dedicated Docker host without any orchestration layer. In the past I've just run jwilder's nginx auto-config magic thing, but it only works for HTTP/S; the TCP plugin isn't wired up.

And now we're adding TCP (WebSocket) connections; any suggestions?

We have Kubernetes elsewhere, but this is going into a place where we can't use k8s, yet. Trying to use DNS and avoid hard-coding poo poo.

Hadlock
Nov 9, 2004

freeasinbeer posted:

Is it in the cloud or bare metal?

Traefik maybe? It can do "manual" configs and can do auto configs from a bunch of different sources of truth.

Bare metal :(

We are Trojan horsing new services as containers into our prod setup.

I will take a look at Traefik; haven't looked at it since the pre-Rancher-1.0 days.

Hadlock
Nov 9, 2004

Docjowles posted:

Well the idea is that it goes from one Jenkins I am responsible for to a bunch of Jenkinses individual teams are responsible for. We provide a platform and then the teams are delegated access to do what they need on it. But I take it I'm doing something very wrong here so am open to suggestions. I'm trying to do the neighborly DevOps thing here.

We get a disproportionate number of tickets requesting changes to Jenkins: upgrades, new plugins, new nodes. Everyone wants their change now. Yet if it's down for 10 seconds, HipChat starts blowing up with "hey is Jenkins down for anyone else?!? Are Jerbs aren't running" comments. I want to get out of the business of managing Jenkins. Unfortunately it's also critical to the business, and a shitton of jobs have built up in there over the years, so just switching to something better isn't possible overnight.

How do you all deal with this? Features of the paid Cloudbees version? Schedule a weekly maintenance window and tell people "tough poo poo, wait til Wednesday nights, and at that time the thing will be restarted so don't schedule or run stuff then"? Some other incredibly obvious thing I am missing?

You're either the Jenkins farmer of the group or you're not. Once you are the designated Jenkins farmer, if you want to get out of that role you will probably need to change companies. Once you find the one guy on the team who is willing to take Jenkins tickets with minimal complaints, you just shovel all the Jenkins tickets down their throat until they choke and die, and/or quit. There is nothing less professionally fulfilling than being a Jenkins farmer.

Spending all day tomorrow setting up our first four Jenkins container pipelines at work :toot:

Hadlock
Nov 9, 2004

OK, I have a kops Kubernetes cluster in AWS, which means it sits behind an ELB.

My network guy wants to send network device logs to our Graylog instance. Most of his devices only allow an IP and don't have the option of DNS. This is a non-negotiable networking BS thing.

So the idea was to set up a virtual IP (elastic IP) and point it at the ELB, then route all incoming port 53 traffic to Kubernetes and have the cluster route it to the bind container doing its thing.

Except that you can't bind an EIP/VIP to an ELB in AWS, because gently caress you, that's why.

We're also running a bind container in the cluster, for reasons. Both problems have the same general solution, so I'll talk about that:

One option that has been floated is to spin up a new EC2 instance, run HAProxy on it, and then point THAT at the ELB. That feels very crunchy though. I know that DNS instead of static IPs solves lots of problems, but we still need raw static-IP load balancing for this specific case.

Thoughts? Are there existing Amazon networking tools we can use to glue a static IP to my dynamic Kubernetes cluster? Maybe bypassing the ELB somehow. Not sure.

I just really don't want to have to spin up a dedicated HAProxy box for this. Any sort of off-the-shelf AWS tooling would be preferable to managing yet another named pet server.

Hadlock
Nov 9, 2004

Oh, can I just have kops assign an existing ENI (elastic network interface) to a random node in my Auto Scaling group?

Option B is to write some sort of Lambda script to poll the status of my ENI and reassign it to another node in the Auto Scaling group if it's unhealthy. Or can I have the Lambda run when my ENI CloudWatch alarm goes off?

Am I overthinking this?
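
The idea in aws-cli terms, for clarity - a Lambda would make the same calls via the SDK, and all the IDs here are made up:

    # find the current attachment
    aws ec2 describe-network-interfaces --network-interface-ids eni-0123456789abcdef0

    # detach from the dead node...
    aws ec2 detach-network-interface --attachment-id eni-attach-0123456789abcdef0 --force

    # ...and re-attach to a healthy node from the autoscaling group
    aws ec2 attach-network-interface \
      --network-interface-id eni-0123456789abcdef0 \
      --instance-id i-0123456789abcdef0 \
      --device-index 1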

Hadlock
Nov 9, 2004

Vulture Culture posted:

You could use an affinity rule to pin the instance to a specific K8s worker node, then expose the service through a NodePort.

Yes, but what happens when the node crashes or gets reaped 25 minutes after it boots, or in the middle of the night, or whenever whoever set it up goes on vacation, etc.?

Hadlock
Nov 9, 2004

Yeah, I looked at Network Load Balancer, and yes, one of the IP-driven services is UDP. The Lambda round-robin ENI approach is looking pretty good at this point.

Hadlock
Nov 9, 2004

OWLS! posted:

We ended up spooling up a monstrosity involving nginx load balancers, lambdas that rebound EIPs, R53 names, etc, etc.

Janky, but it works.

gently caress UDP traffic, and gently caress amazon NLBs for not supporting it

I think we work at the same company, because that's the same general solution we came up with yesterday.

Found a nice Docker container that just does UDP port 53 load balancing using nginx; you pass in the load-balanced hosts as a string in an env var. We're running it on two hosts; a Lambda scrapes the Auto Scaling group IPs and then relaunches the container on each Docker host with the new env var string.

It's a loving mess.

Also super gently caress aws for not doing UDP load balancing

Hadlock
Nov 9, 2004

Hadlock posted:

Also super gently caress aws for not doing UDP load balancing

I spent like 2 hours today reading through all the Kubernetes UDP-related doco; as far as I can tell, SREs are allergic to UDP.

Someone on Stack Exchange noted back in 2016 that Docker's official documentation doesn't even explain how to expose UDP ports when using the docker run command. That comment is still true to this day.

I found a super cool Docker image that lets you run a UDP load balancer in a container with very little config:

https://hub.docker.com/r/instantlinux/udp-nginx-proxy/

Going to move it to Container Linux and have it boot with a launch config in an Auto Scaling group of 1. A Lambda will update the cloud-config file with the new backends and then nuke the node(s) in the Auto Scaling group... haven't figured out how to attach our singular ENI to a singular autoscaled node yet.

So far no issues. Waiting on my coworker to finish building our bind container, and then we'll put things through the wringer over the next couple of weeks; this crazy DNS system will be the linchpin of our database DR system... should be interesting.

Hadlock
Nov 9, 2004

It definitely works; you need to EXPOSE 53/udp in the Dockerfile and pass -p 53:53/udp to docker run.

Got it working yesterday afternoon, works like a charm
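
Concretely, for anyone else who trips over this (the image name is a placeholder):

    # Dockerfile
    EXPOSE 53/udp

    # run side - note the /udp suffix, which the docs gloss over
    docker run -d -p 53:53/udp my-bind-image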

Hadlock
Nov 9, 2004

Also, I've been running OpenVPN in a container on a personal server for 18+ months with no issues, so

Hadlock
Nov 9, 2004

Currently migrating from Bamboo to Jenkins because our company is completely insane: we have some goofy license that allows unlimited Bamboo server installs, but we have to pay $$$ per agent per year. So we have 16+ Bamboo servers, each with 32 cores and 192 GB RAM, and 0 agents.

When we ran into a problem (turned out we hit the Linux user-process limit for the bamboo user), Atlassian sent us an email saying something like "we have never heard of anyone running Bamboo in this configuration; we suggest you use more agents than just the primary Bamboo server".

We're currently only using Jenkins to run bash that builds containers and deploys them. If anything needs some goofy Jenkins plugin, it's time to add a trigger to Jenkins and move that process outside of Jenkins.

Hadlock
Nov 9, 2004

The Fool posted:

It's containers all the way down

We just rolled out a private DNS system that's DNS servers as containers: slave DNS server containers at each office; AWS doesn't do UDP load balancers, so those are containers too; and dns-exporters in containers to validate everything's working as expected. Monitored by Prometheus/Grafana, which are also containers.

Hadlock
Nov 9, 2004

Grafana natively supports AWS CloudWatch as a datasource out of the box; all it needs is a read-only billing IAM key to get started. We have it on our main monitoring display, and it's nice to have a visual representation of how much you spent last month versus your rate of spend this month.

We're moving pretty rapidly into AWS from the bare-metal world, and it's easy to leave extra poo poo on or over-provision. We jumped from ~$7K spend to $14K spend and were able to dial it back by watching the graph. Boss man also likes it for budgeting: it demonstrates a pretty linear growth rate in each month's sawtooth at the 6-month zoom level, which the finance/budget/CFO guys like, and it gives everyone a pretty warm fuzzy that spending is under control.

https://grafana.com/dashboards/139
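
On Grafana 5+ you can even provision the datasource from a file instead of clicking through the UI - a hedged sketch (region and key handling are placeholders):

    # /etc/grafana/provisioning/datasources/cloudwatch.yaml
    apiVersion: 1
    datasources:
      - name: CloudWatch
        type: cloudwatch
        jsonData:
          authType: keys
          defaultRegion: us-east-1
        secureJsonData:
          accessKey: AKIA...          # the read-only billing IAM key
          secretKey: "<secret>"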

Hadlock
Nov 9, 2004

If Prometheus/Grafana is the open-source monitoring solution,

what is the log management equivalent these days?

Bonus points if there's already a Helm chart for it.

Hadlock
Nov 9, 2004

Yeah, people are putting GUI apps in containers now, it's a thing. This is basically what "Snaps" are in the Ubuntu world. X11 apps are especially easy. Just because it isn't the original purpose doesn't mean it's bad. We have a webapp, vault-ui, that provides a front end for our Vault server; it's pretty cool.

Ploft-shell crab posted:

I think ELK/EFK is pretty widespread, no?

I think our biggest problem with Kibana is that the LDAP login plugin is like $1600 a year, and there are no free alternatives. Right now we have a rudimentary Graylog 2 install (which does support LDAP), but Graylog 3 is coming out soon, and I think GL2 uses an older version of Elasticsearch. Looking for a better alternative.
