luchadornado
Oct 7, 2004

A boombox is not a toy!

ELK stack/Grafana/Splunk/etc. are fine when you have your company's money to pay for things and hardware to spare, but what are the options for lightweight personal, probably even non-distributed, metrics collection and visualization?

Use case: I've got 3-4 Rust services that are producing to a Kafka topic on a single broker. I then have a single Elixir service exposing that topic as an API. This is all running on a 2 cpu virtual server that isn't for revenue or anything, it's more for fun and learning.

I'd love to have some sort of agent that aggregates the logs for a status page that lets me do super simple visualizations of error count/rate, latency, and topic lag. Looking around, I feel like I'm going to have to write something from scratch, but I can't be the first person to say "it'd be nice to have bare bones metrics for my personal projects".

edit: I think I'm leaning towards funnel -> influxdb -> grafana, but I still feel like that might be a bit overkill and would welcome other thoughts.

luchadornado fucked around with this message at 16:39 on Sep 8, 2018


Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
Opencv question:

Setup: My dad had a stroke, and I've taken possession of his laptop to try to finish the digital archiving of family memories. He had borrowed an 8mm capture system, but for whatever reason the captured frames weren't at a fixed location. Don't know if it was operator error or the hardware was just junk, but it's basically a movie of frames zipping upwards: https://i.imgur.com/gIEXbby.webm

I used opencv to track the sprockets and crop the resulting image to frame: https://i.imgur.com/UfkUhTk.webm

grayscale->threshold->findContours->get bounding rectangle->verify it's the right size in the right place->crop original image->save
I add a tiny border around it to stop the feature I'm looking at from touching the edge, as that creates a contour around the entire image frame.
I even handle joining two contours because there's a hair across the sprocket hole which turns it into two contours.

Project is here: https://github.com/harikattar/8mm/blob/master/8mm.py
E: It's not perfect but it's my first foray into opencv and I only spent a few hours on it so far.

Questions:
Most important: Some captures have the sprockets cut off partway. To fix it, I want to best-fit a sprocket-shaped rectangle as far left in the contour as possible, and use that as a guide. Right now I just look for an exact size, which of course fails when they're cut off.

Less important: How do I auto-adjust the thresholding to detect the sprocket holes? I get a lot of skipped frames at the beginning of captures because light damage makes the holes much larger than they should be. The raw example shows what I mean - that initial setup shot of the beach is missing from the final video because I couldn't track it.

Completely trivial: how do I inline imgur webms? I want to post before&after to the screenshots thread.

Linear Zoetrope
Nov 28, 2011

A hero must cook
I have to ensure that, in a variable-length sequence, something randomly happens once, with probability p.

Basically, I have a stream of streams:

code:
(a->b->c)->
(a->b)->
(a->b->c->d)
Under the following conditions:

  • With probability p, something must occur at a non-terminal element of the stream; otherwise nothing happens
  • The element the event occurs at is chosen uniformly at random from the non-terminal elements
  • Whatever happens must occur when that element is observed; it cannot be done after observing the whole stream
  • I have no sense of when the stream will terminate; I only know it's terminated when I observe a state which communicates it's the terminal element

This is actually a weak simulator (can't rewind), and the chosen element is the step to take a random action on. Yes, I know this isn't how epsilon-greedy works; I'm doing something very specific for a research problem.

This isn't actually a big deal -- I know my simulator in this case lasts ~2-5 steps, so I just roll a number designating the step, and if it's longer than the sequence I actually observe, nothing happens. It's not all nice and uniformly random, but I don't really need that guarantee; it's just a nice thing to have. I'm just kind of curious if there's a way to meet those constraints.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Consider the case where p = 1.

If you choose the event to happen on the first element, you have a non-uniform distribution.

If you advance the sequence, you might run out of non-terminal elements, and then you haven't met your probability goal.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


There's probably some variation of reservoir sampling that'll do what you want.
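For reference, the size-1 reservoir sampler looks like this (a sketch; note the uniform pick is only settled once the stream ends, which is exactly the obstruction Jabor describes for acting at observation time):

```python
import random

def reservoir_pick(stream, rng=random):
    """Uniformly pick one element from a stream of unknown length
    (size-1 reservoir sampling). The current choice can be replaced
    by any later element, so the pick is only final at stream end --
    it can't trigger an action *at* the chosen element.
    """
    choice = None
    for i, item in enumerate(stream, start=1):
        # keep the i-th element with probability 1/i; this leaves every
        # element seen so far equally likely to be the current choice
        if rng.random() < 1.0 / i:
            choice = item
    return choice
```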

Sedro
Dec 31, 2008

Khorne posted:

Is this prepared statement vulnerable to any SQL tomfoolery?
code:
SELECT ui.what, ui.okay, ui.hmm FROM sometable as ui WHERE ui.hmm LIKE CONCAT('%', ?, '%') AND ui.hmm LIKE CONCAT('%', ?, '%')
Doing code review on a legacy application. In the context it's being used it's fine, but this would throw up a little red flag for me if it were part of a web service or anything exposed to actual user input. For potential performance reasons at the very least.

You're safe from SQL injection, but you might be vulnerable to DoS with crafted inputs like '%%%%%%%%%%%%%%%%%'.
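If that input ever does come from users, one mitigation is to escape the LIKE wildcards before binding the parameter (a sketch; `escape_like` is a hypothetical helper, and the query would need a matching ESCAPE clause):

```python
def escape_like(term, escape_char="\\"):
    """Escape LIKE wildcards in user input so '%' and '_' match
    literally when the value is bound as a parameter.

    The query must declare the same escape character, e.g.:
        ... WHERE ui.hmm LIKE CONCAT('%', ?, '%') ESCAPE '\\'
    """
    # escape the escape character first, then the wildcards
    for ch in (escape_char, "%", "_"):
        term = term.replace(ch, escape_char + ch)
    return term
```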

Macichne Leainig
Jul 26, 2012

by VG
Not really sure where this question best fits since it's a general git workflow question.

One of our developers is concerned that squash merges are bad whenever we down-merge from a version branch to a feature branch, since when we eventually up-merge the feature branch, git can't compare the commit histories as accurately and you have to do more manual merging.

Is this legit? I don't know much about git, but I think whatever merge tool is used makes a bigger difference? Not that it's a bad idea to have the full branch history, so I don't disagree; I'm just curious if his logic has merit.

Karate Bastard
Jul 31, 2007
Probation
Can't post for 6 hours!
Soiled Meat
Do we have a machine learning / ai thread hereabouts? If not, what are some good resources for starting to play around with this stuff?

Sindai
Jan 24, 2007
i want to achieve immortality through not dying

Protocol7 posted:

Not really sure where this question best fits since it's a general git workflow question.

One of our developers is concerned that squash merges are bad whenever we down-merge from a version branch to a feature branch, since when we eventually up-merge the feature branch, git can't compare the commit histories as accurately and you have to do more manual merging.

Is this legit? I don't know much about git, but I think whatever merge tool is used makes a bigger difference? Not that it's a bad idea to have the full branch history, so I don't disagree; I'm just curious if his logic has merit.
Merge tool shouldn't matter since this is about git itself.

I haven't run into that situation but the second answer on this seems to explain in detail why it's a bad idea: https://stackoverflow.com/questions/36985798/squashed-commits-onto-feature-branch-and-cant-have-a-clean-merge-back-to-master

When you squash into the feature branch you're breaking the link between the two branches and giving them two different commit histories, which confuses git.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Karate Bastard posted:

Do we have a machine learning / ai thread hereabouts? If not, what are some good resources for starting to play around with this stuff?

There's a data science thread over in SAL but nothing that's dedicated to machine learning.

Karate Bastard
Jul 31, 2007
Probation
Can't post for 6 hours!
Soiled Meat
Thanks!

Btw am following a web course that keeps recommending to prototype your machine learning stuff in Octave before implementing your production code in eg Javer, which tickles my bugfuck eye twitching nerve something fierce but probably does not warrant its own thread.

Java

E: also there's an onscreen transcript that keeps spelling it "Optive", and claiming that variables are "valued" when the guy obviously says "discrete valued" like what the arse man

Karate Bastard fucked around with this message at 09:28 on Sep 19, 2018

Nippashish
Nov 2, 2005

Let me see you dance!

Octave was pretty popular 5 or 6 years ago, but thankfully most of the world has moved on to Python.

Karate Bastard
Jul 31, 2007
Probation
Can't post for 6 hours!
Soiled Meat
What about Julia?

huhu
Feb 24, 2006
If I were you, I'd stop watching a course recommending outdated tools.

Karate Bastard
Jul 31, 2007
Probation
Can't post for 6 hours!
Soiled Meat
Oh they're not recommending Julia, that was just me being curious about some other thing that was the hype back when I was sort of involved with this stuff last, like what became of it? Is anyone using it?

Yeah about that course, says Stanford on the tin. Feels pretty basic nonetheless, but I guess they're building momentum as they go.

Would you happen to have any other recommendations?

Volguus
Mar 3, 2009

huhu posted:

If I were you, I'd stop watching a course recommending outdated tools.

I doubt that the recommended tools have anything to do with anything here. He's trying to learn about ML and a big part of it is understanding the math behind it. Octave works perfectly fine. Yes, python comes nowadays with very powerful ML and math libraries, and he will need to work with them in the future, but the math is still the same. Whether you're doing it on paper, in Octave, Mathematica or python is irrelevant.
Now, if the math presented in the course is bad/wrong/irrelevant that's a different issue.
If I were you, I'd stop giving lovely advice.

Karate Bastard
Jul 31, 2007
Probation
Can't post for 6 hours!
Soiled Meat
If I know one thing about Octave it's that I hate Matlab, so I'm happy to learn some :)

I've used a bit of numpy back in the day and I seem to recall massive suckage. Is it any better now or should I just git gud / use something better?

I've tried to learn R but I've discovered that I hate that too :xd: Is there any happiness down that road or is python the way to go?

pangstrom
Jan 25, 2003

Wedge Regret
General case, just start with numpy yeah. It's the best "if you only know 1..." option.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Karate Bastard posted:

If I know one thing about Octave it's that I hate Matlab, so I'm happy to learn some :)

I've used a bit of numpy back in the day and I seem to recall massive suckage. Is it any better now or should I just git gud / use something better?

I've tried to learn R but I've discovered that I hate that too :xd: Is there any happiness down that road or is python the way to go?

numpy is very good
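For what it's worth, modern numpy lets a few vectorized lines do what used to take explicit loops -- e.g. a least-squares line fit (a toy example):

```python
import numpy as np

# Vectorized operations replace explicit Python loops: fit y = a*x + b
# to a whole array at once with a single least-squares solve.
x = np.arange(10, dtype=float)
y = 3.0 * x + 1.0                           # noiseless synthetic data
A = np.stack([x, np.ones_like(x)], axis=1)  # design matrix [x | 1]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
# a comes out ~3.0 and b ~1.0, recovering the line exactly
```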

Star War Sex Parrot
Oct 2, 2003

Karate Bastard posted:

I've tried to learn R but I've discovered that I hate that too :xd: Is there any happiness down that road or is python the way to go?
Use Python.

huhu
Feb 24, 2006

Volguus posted:

I doubt that the recommended tools have anything to do with anything here.
You're going to have a much better experience watching a course with the tools you're going to use after you finish learning.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Karate Bastard posted:

I've tried to learn R but I've discovered that I hate that too :xd: Is there any happiness down that road or is python the way to go?

R is a terrible programming language but it has the best facilities for exploratory data analysis by far. There are also a lot of statistical methods that are only available in R.

Karate Bastard
Jul 31, 2007
Probation
Can't post for 6 hours!
Soiled Meat
Python is my favorite hammer.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Does anyone have any good resources on the pros and cons of monorepos with regard to microservices? We are about to start this journey, but right out of the gate we are really undecided whether mono or multi is the way to go. I've found a bunch of random blog posts on the subject but was hoping for something a bit meatier, something that looks beyond just the developer experience.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Mega Comrade posted:

Does anyone have any good resources on the pros and cons of monorepos with regard to microservices? We are about to start this journey, but right out of the gate we are really undecided whether mono or multi is the way to go. I've found a bunch of random blog posts on the subject but was hoping for something a bit meatier, something that looks beyond just the developer experience.

They’re git repos for a bunch of code no one but developers will touch. How far beyond the developer experience do you want to look?

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

leper khan posted:

They’re git repos for a bunch of code no one but developers will touch. How far beyond the developer experience do you want to look?

DevOps mostly. A lot of monorepo proponents don't mention versioning, build pipelines, or how they handle things like git history rewriting.
A lot of the pro arguments I see boil down to 'I can just pull it all, do my feature, and push without thinking too much, so it is better'.

I know there is no right answer for this question; it's very much a 'what suits your culture better' thing. But it's going to be a bitch to change direction once we start, so I want to make at least an informed decision, so in 6 months when I realise I chose the wrong one I'll know at least I tried.

Mega Comrade fucked around with this message at 12:17 on Sep 20, 2018

Love Stole the Day
Nov 4, 2012
Please give me free quality professional advice so I can be a baby about it and insult you
I feel bad about the whole Data Science thing because as a math grad who also passed a really hard finance professional exam you'd think I would have a much better chance at getting a job doing that stuff because of the whole statistics thing, but I'm already so far down the rabbit holes of web dev and game dev that as I sit in traffic with everyone else trying to find a job in those industries I'm just rubber necking the Data Science lane watching everybody else go by. I'm tempted to want to try and switch lanes to join them, but I imagine it'll be like in the beginning of Office Space where as soon as I have a portfolio of decent projects under my belt the traffic will be backed up just like it was when I finally got some decent web dev and game dev projects in my portfolio. So instead I'm just sitting here stuck in the gridlock waiting for my chance to get an offer watching other stuff go by and feeling bad about everything telling myself that maybe it's just better this way.

Love Stole the Day fucked around with this message at 12:32 on Sep 20, 2018

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

Mega Comrade posted:

DevOps mostly. A lot of monorepo proponents don't mention versioning, build pipelines, or how they handle things like git history rewriting.
A lot of the pro arguments I see boil down to 'I can just pull it all, do my feature, and push without thinking too much, so it is better'.

I know there is no right answer for this question; it's very much a 'what suits your culture better' thing. But it's going to be a bitch to change direction once we start, so I want to make at least an informed decision, so in 6 months when I realise I chose the wrong one I'll know at least I tried.

Glad this was what you were looking for. Monorepos are terrible for automating targeted actions unless you have some really deep and dark CI magic that identifies the programmatic subunit of the repo where the most recent changeset occurred. This means that anything you run against the repo in your CI environment will take longer and will exercise code that hasn't changed.

That may be fine early on, but it can be frustrating for developers when they can't get a passing check in order to merge their PR because it's queued up behind 3 other checks and a build from an hour before. That's probably fine if you have per-sprint, per-month, or per-quarter releases but not so great if you do continuous delivery.

Basically you should be asking yourself the following:
1. What are you attempting to accomplish by putting the microservices together versus keeping them apart?
2. How mature and well developed is your CI pipeline?
3. How frequent are your releases?

luchadornado
Oct 7, 2004

A boombox is not a toy!

Mega Comrade posted:

git history rewriting

That sounds like a bad thing that you don't want? But anyways on to your question...

Start with Fowler's take on microservices: https://martinfowler.com/articles/microservices.html

Then just search Hacker News for either "microservice" or "monolith" and read a few comment trees from a handful of stories to get some thoughts about both sides. https://segment.com/blog/goodbye-microservices/ was a recent one that was interesting. Bear in mind that monorepo doesn't mean monolith, and you can leverage some of the benefits of both by deploying microservices out of a monorepo; your CI/CD just gets a little more complicated. Probably not as useful to your situation, but reading about how Google handles its monorepo can be eye-opening.

quote:

but it's going to be a bitch to change direction

The goal is that it shouldn't be too much of a bitch, IMO. If you can't break up a repo a little or consolidate a few microservices without a death march, that's the problem. Because even really, really good devs never get it right the first time. When in doubt I've defaulted to monolith for MVP, break up a little for better productivity/deployment, and keep iterating from there. I'll jump right to the microservices on certain things that I always end up breaking out. It also really helps if you're a shop that looks at any message delivery as a place where Kafka should be used - then your microservices are just things that read from one topic and spit out to another topic.

quote:

That may be fine early on, but it can be frustrating for developers when they can't get a passing check in order to merge their PR because it's queued up behind 3 other checks and a build from an hour before. That's probably fine if you have per-sprint, per-month, or per-quarter releases but not so great if you do continuous delivery.

100 times this. If your CI/CD is held up like this, you need to break things up more. Developer productivity is almost always the end goal. Unless you're designing pacemakers or something I suppose. I'm guessing you're not.

luchadornado fucked around with this message at 12:43 on Sep 20, 2018

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Helicity posted:

That sounds like a bad thing that you don't want? But anyways on to your question...

Sure, but sometimes things go wrong and it needs to be done. With a small project and a few devs it's not too painful; with a monorepo and dozens of devs I can see it being a days-long task.

luchadornado
Oct 7, 2004

A boombox is not a toy!

Mega Comrade posted:

Sure, but sometimes things go wrong and it needs to be done. With a small project and a few devs it's not too painful; with a monorepo and dozens of devs I can see it being a days-long task.

git history is a DAG: a directed acyclic graph. The whole point is to always move forward.

edit: is this a common thing for others? I've been at 5 companies ranging from struggling startup to massive enterprise and I've had to do this 2-3 times in total. I guess I'm trying to say that fighting against the way git works probably shouldn't be a decision in how you structure your code base and architecture, and hopefully not a routine occurrence.

luchadornado fucked around with this message at 13:12 on Sep 20, 2018

Volguus
Mar 3, 2009

Helicity posted:

git history is a DAG: a directed acyclic graph. The whole point is to always move forward.

edit: is this a common thing for others? I've been at 5 companies ranging from struggling startup to massive enterprise and I've had to do this 2-3 times in total. I guess I'm trying to say that fighting against the way git works probably shouldn't be a decision in how you structure your code base and architecture, and hopefully not a routine occurrence.

No, rewriting git history is not a common thing, normally. There was (still is?), however, a fad going on that advocates doing exactly that on a daily basis. That is, the workflow comes and says: instead of using git properly (you know, create branches for the poo poo you're working on, make sure you're alone in said branch), how about you do your work-in-progress commits (the little ones, that we do all the time) in a "main enough" branch that everyone is working on, and when you're done, just rewrite history (push --force) to make the git history look nicer.
And oh, here's how you deal with the inevitable shitstorm that will happen once you obliterate your coworker's work.

It's hosed up, but it exists. On this very forum i saw people advocating that.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Love Stole the Day posted:

I feel bad about the whole Data Science thing because as a math grad who also passed a really hard finance professional exam you'd think I would have a much better chance at getting a job doing that stuff because of the whole statistics thing, but I'm already so far down the rabbit holes of web dev and game dev that as I sit in traffic with everyone else trying to find a job in those industries I'm just rubber necking the Data Science lane watching everybody else go by. I'm tempted to want to try and switch lanes to join them, but I imagine it'll be like in the beginning of Office Space where as soon as I have a portfolio of decent projects under my belt the traffic will be backed up just like it was when I finally got some decent web dev and game dev projects in my portfolio. So instead I'm just sitting here stuck in the gridlock waiting for my chance to get an offer watching other stuff go by and feeling bad about everything telling myself that maybe it's just better this way.

The number of people who can handle web apps and the statistics side of things is small. That might be a good niche to explore in the hopes of finding a job that you like better than the one you might get if you just do web apps.

Karate Bastard posted:

Python is my favorite hammer.

Mine is machine learning.

Dren
Jan 5, 2001

Pillbug

Volguus posted:

No, rewriting git history is not a common thing, normally. There was (still is?), however, a fad going on that advocates doing exactly that on a daily basis. That is, the workflow comes and says: instead of using git properly (you know, create branches for the poo poo you're working on, make sure you're alone in said branch), how about you do your work-in-progress commits (the little ones, that we do all the time) in a "main enough" branch that everyone is working on, and when you're done, just rewrite history (push --force) to make the git history look nicer.
And oh, here's how you deal with the inevitable shitstorm that will happen once you obliterate your coworker's work.

It's hosed up, but it exists. On this very forum i saw people advocating that.

what why

The Phabricator workflow has you do a feature branch that gets squashed prior to merging the PR. I've used that; it's fine and good, but the rewrite happens just before the final merge and subsequent deletion of the feature branch, so it does not ruin everything.

Dren
Jan 5, 2001

Pillbug

Helicity posted:

Then just search Hacker News for either "microservice" or "monolith" and read a few comment trees from a handful of stories to get some thoughts about both sides. https://segment.com/blog/goodbye-microservices/ was a recent one that was interesting. Bear in mind that monorepo doesn't mean monolith, and you can leverage some of the benefits of both by deploying microservices out of a monorepo; your CI/CD just gets a little more complicated. Probably not as useful to your situation, but reading about how Google handles its monorepo can be eye-opening.

I'm not convinced those guys were focused on the right problems in deciding to move from microservices to a monolith. If they had done their resilient test suite improvement and committed to keeping every service running only up-to-date versions of their shared code, they could have reaped the benefits they attribute to the monolith architecture without the drawbacks of losing fault isolation (this seems like a huge problem to me) and caching efficiencies. They mentioned that moving to a monolith helped them with auto-scaling their services, but I'm not quite sure how, so it's hard to say whether there was a workable microservice solution there. They probably also needed to rethink how they did their deploys, because I don't quite understand why this was a problem:

quote:

However, a new problem began to arise. Testing and deploying changes to these shared libraries impacted all of our destinations. It began to require considerable time and effort to maintain. Making changes to improve our libraries, knowing we’d have to test and deploy dozens of services, was a risky proposition. When pressed for time, engineers would only include the updated versions of these libraries on a single destination’s codebase.
Like, don't changes to the shared libraries under the monolith architecture also require deploying and testing all of the services? The monolith moves from dozens of deploys to a single deploy, but the test burden is the same. And if the devops work they've done is at all sensible, deploying dozens of services should be push-button just like deploying a single service. Like, fix the testing infrastructure to not take hours to run, fix the deploy process to be push-button, force the services to only use the latest shared libs, and... done?

luchadornado
Oct 7, 2004

A boombox is not a toy!

Volguus posted:

It's hosed up, but it exists. On this very forum i saw people advocating that.

That is so goddamn bad. I'm usually OK saying "whatever works" but people recommending that are seriously bad. Work on branches, and rebase or squash before merging into master or whatever your "main" branch is. That keeps the log clean.

Dren posted:

I'm not convinced those guys were focused on the right problems in deciding to move from microservices to a monolith.

I'm not either, but it's interesting getting their point of view and reading the associated comments on HN. Things to think about.

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Love Stole the Day posted:

I feel bad about the whole Data Science thing because as a math grad who also passed a really hard finance professional exam you'd think I would have a much better chance at getting a job doing that stuff because of the whole statistics thing, but I'm already so far down the rabbit holes of web dev and game dev that as I sit in traffic with everyone else trying to find a job in those industries I'm just rubber necking the Data Science lane watching everybody else go by. I'm tempted to want to try and switch lanes to join them, but I imagine it'll be like in the beginning of Office Space where as soon as I have a portfolio of decent projects under my belt the traffic will be backed up just like it was when I finally got some decent web dev and game dev projects in my portfolio. So instead I'm just sitting here stuck in the gridlock waiting for my chance to get an offer watching other stuff go by and feeling bad about everything telling myself that maybe it's just better this way.

Yeah it's really strange. I get approached for BI jobs like monthly, but that's not really my bag; I'm an implementer and administrator, and yes, I've set up Cognos cubes, Netsuite Analytics, Adobe, and even SAP/Peoplesoft on the ERP side. But I'm not the guy who knows the best practices/analysis for your business; I'm the guy who knows a lot about databases and servers and makes sure it works in the first place. I blame recruiters looking for software packages instead of actual experience.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Volguus posted:

No, rewriting git history is not a common thing, normally. There was (still is?), however, a fad going on that advocates doing exactly that on a daily basis. That is, the workflow comes and says: instead of using git properly (you know, create branches for the poo poo you're working on, make sure you're alone in said branch), how about you do your work-in-progress commits (the little ones, that we do all the time) in a "main enough" branch that everyone is working on, and when you're done, just rewrite history (push --force) to make the git history look nicer.
And oh, here's how you deal with the inevitable shitstorm that will happen once you obliterate your coworker's work.

It's hosed up, but it exists. On this very forum i saw people advocating that.

I've literally never seen anyone advocate this, so I'm pretty sure you're just misunderstanding people talking about doing WIP commits on a personal branch that you then rewrite into something sensible before mixing it with other peoples' code.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
Yeah, I have never seen anyone recommend using rebase or push --force on a shared branch.


Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I force-pushed to a shared branch once, when I accidentally checked in a privkey. That's the only time I've done it, and I rewrite history a ton.
