porksmash
Sep 30, 2008

Hadlock posted:

Ok I rewrote my app as a Django project with postgres back end. Right now via web UI you can create and edit "stack" model objects in the db, and view the stacks. Each stack has about 15 fields.

Would like to CRUD this model via API as well, so our CI/CD system can create them remotely, probably using basic auth via RESTful API.

I'm thinking that I use... Django-Tastypie? What is the best option here? Something dead simple, not doing anything fancy.

Also, I have a slow, external data source that I need to poll periodically, every 10 minutes. Should this be another Django app inside the existing project, maybe call it sched, and then use sched to run these API calls to the slow data source in the background? What's the best pattern for that here?

Django REST Framework, hands down.

For simple scheduled jobs I'd probably write it as a management command and call it via cronjob. Alternatively, there's celery-beat - but that's probably not worth it for one/two scheduled tasks.
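A rough sketch of the management-command route, in case it helps; the app, model, and command names here are made up, not Hadlock's actual project:

code:
# stacks/management/commands/poll_source.py
from django.core.management.base import BaseCommand

from stacks.models import Stack  # hypothetical model


class Command(BaseCommand):
    help = "Poll the slow external data source and update Stack rows"

    def handle(self, *args, **options):
        # call the external API here, then update the relevant Stack objects
        for stack in Stack.objects.all():
            ...  # apply fields from the API response
        self.stdout.write(self.style.SUCCESS("poll complete"))

Then a crontab entry along the lines of */10 * * * * python /app/manage.py poll_source runs it every ten minutes.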

porksmash fucked around with this message at 05:29 on Nov 13, 2018

Hadlock
Nov 9, 2004

Django rest framework looks like the ticket, thanks

Cron would probably be simpler, but this needs to be totally self-contained in a container so I'm looking pretty hard at django_celery_beat, as it's easier to spin up a redis container than it is to set up cron tbh
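For what it's worth, the static celery beat config looks roughly like this (django_celery_beat mostly just moves the schedule into the database/admin instead); the module and task names here are placeholders:

code:
# myproject/celery.py
from celery import Celery

app = Celery("myproject", broker="redis://redis:6379/0")  # assumes a redis container

@app.task
def poll_external_source():
    ...  # call the slow external API and store the results

app.conf.beat_schedule = {
    "poll-every-10-minutes": {
        "task": "myproject.celery.poll_external_source",
        "schedule": 600.0,  # seconds
    },
}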

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
i would like to randomly say out of nowhere that hypothesis is love, hypothesis is life

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
i found 90 bugs in the span of like 5 hours

Modulo16
Feb 12, 2014

"Authorities say the phony Pope can be recognized by his high-top sneakers and incredibly foul mouth."

Thanks all for the smokeping recommendation. I was able to sell it to management that I could stand this up and create lists from the SQL query. It works much better.

The DPRK
Nov 18, 2006

Lipstick Apathy
Hello all,

I started learning Python earlier this year and I feel like I'm getting somewhere with it. I work for a small digital marketing company and have found there are lots of opportunities to use coding to make life easier and provide solutions that I otherwise wouldn't be able to.

So far the most complex thing I've written is a tool that regularly scans client websites for the presence of "noindex" and outputs the results to one of two Google Sheets (1. all is good on this website; 2. I found an error on this website). If there is one present when it shouldn't be, I've set up one of the Google Sheets to email me immediately (so we can email them immediately, and save them potentially loads in revenue from lost Google rankings).
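For anyone curious, the core of a check like that can be quite small; this is only a rough sketch of the idea, not the poster's actual code:

code:
import requests
from bs4 import BeautifulSoup

def has_noindex(url: str) -> bool:
    """Return True if the page carries a robots noindex directive."""
    resp = requests.get(url, timeout=10)
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        return True
    soup = BeautifulSoup(resp.text, "html.parser")
    for meta in soup.find_all("meta", attrs={"name": "robots"}):
        if "noindex" in (meta.get("content") or "").lower():
            return True
    return False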

My next project is a much more ambitious one and I'm a bit stuck with how to tackle it, so if anyone has any thoughts I'd really like to hear them.

I'm planning to create a website for our clients (currently learning how to use Django but this may be overkill) that you log in to with your credentials and it shows you a dashboard of your website's SEO performance with a number of metrics (e.g. how many of your keywords were on the 1st page this week, how many links you currently have coming in to your website, etc.), and underneath that a list of tasks to complete to improve on this.

I'm currently following YouTuber CSDojo's guide to creating a to-do list app but I'm not sure how I'm going to tackle the data side. I suspect I'll be using matplotlib or another resource like it to graph the data but I'm not sure how I might collect it, store it, etc.

cinci zoo sniper
Mar 15, 2013




The DPRK posted:

Hello all,

I started learning Python earlier this year and I feel like I'm getting somewhere with it. I work for a small digital marketing company and have found there are lots of opportunities to use coding to make life easier and provide solutions that I otherwise wouldn't be able to.

So far the most complex thing I've written is a tool that regularly scans client websites for the presence of "noindex" and outputs the results to one of two Google Sheets (1. all is good on this website; 2. I found an error on this website). If there is one present when it shouldn't be, I've set up one of the Google Sheets to email me immediately (so we can email them immediately, and save them potentially loads in revenue from lost Google rankings).

My next project is a much more ambitious one and I'm a bit stuck with how to tackle it, so if anyone has any thoughts I'd really like to hear them.

I'm planning to create a website for our clients (currently learning how to use Django but this may be overkill) that you log in to with your credentials and it shows you a dashboard of your website's SEO performance with a number of metrics (e.g. how many of your keywords were on the 1st page this week, how many links you currently have coming in to your website, etc.), and underneath that a list of tasks to complete to improve on this.

I'm currently following YouTuber CSDojo's guide to creating a to-do list app but I'm not sure how I'm going to tackle the data side. I suspect I'll be using matplotlib or another resource like it to graph the data but I'm not sure how I might collect it, store it, etc.

You could check out Dash by Plot.ly to see if it’s still a healthy project - I did a nifty business intelligence website prototype for my company in it in under two weeks.

Collection depends on where you get data from; for storage, use PostgreSQL or SQLite.

For quick and dirty general web visualisation you'll probably be using Plot.ly either way, or maybe Bokeh.
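If it helps to see the shape of a Dash app, here is a minimal sketch along the lines of the official getting-started examples; the data and layout are made up, and the import paths have moved around in newer Dash releases:

code:
import dash
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash(__name__)

app.layout = html.Div([
    html.H1("SEO dashboard"),
    dcc.Graph(
        id="keywords-on-page-one",
        figure={
            "data": [{"x": ["wk 1", "wk 2", "wk 3"], "y": [12, 14, 17], "type": "bar"}],
            "layout": {"title": "Keywords ranking on page 1"},
        },
    ),
])

if __name__ == "__main__":
    app.run_server(debug=True)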

The DPRK
Nov 18, 2006

Lipstick Apathy

cinci zoo sniper posted:

You could check out Dash by Plot.ly to see if it’s still a healthy project - I did a nifty business intelligence website prototype for my company in it in under two weeks.

Collection depends on where you get data from; for storage, use PostgreSQL or SQLite.

For quick and dirty general web visualisation you'll probably be using Plot.ly either way, or maybe Bokeh.

Thanks for the suggestion. I’ll look up Dash. :)

Any chance I could see a version of your Dash project?

I will be getting a lot of data from Moz and some from Analytics and Search Console. I’m a bit clueless about how Postgres and SQLite differ but I guess I’ll have to learn haha

cinci zoo sniper
Mar 15, 2013




The DPRK posted:

Thanks for the suggestion. I’ll look up Dash. :)

Any chance I could see a version of your Dash project?

Can't share implementation specifics unfortunately, NDA. In essence it's not too dissimilar from this official Dash example. I had to provide dashboards for several projects, so I did a plain landing page that would point to individual dashboards, each running as a segregated application. These applications would all talk to the same database server, which would itself automatically fetch new data at regular intervals.

The DPRK posted:

I will be getting a lot of data from Moz and some from Analytics and Search Console. I’m a bit clueless about how Postgres and SQLite differ but I guess I’ll have to learn haha

I've not worked much with Moz & friends, but the general scenario should be similar in all such projects:

1) Extract data from source systems.
2) Mangle it to fit the tables you have defined.
3) Load it into your database.
4) Point your web application at the database to fetch data from.

You definitely want to learn at least the basics of the "admin side" of SQL stuff if you're dealing with projects like the one described. PostgreSQL and SQLite are two open source RDBMS that should together cover the ground for most, if not all, analytical projects you'd normally be doing without a support team to take care of this part of the equation. As for choosing between the two, this will get you started. Ignore MySQL, even though MySQL 8 may not be absolute dogshit when it comes to analytical use cases (it's also new and relatively untested compared to the MySQL 5 everyone still has in mind).
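To make steps 1-4 concrete, here is a bare-bones sketch using SQLite; the endpoint and column names are invented for illustration:

code:
import sqlite3

import pandas as pd
import requests

# 1) extract from the source system (assuming the API returns a list of records)
raw = requests.get("https://api.example.com/keyword-rankings", timeout=30).json()

# 2) mangle it into the shape of your table
df = pd.DataFrame(raw)[["keyword", "position", "date"]]

# 3) load it into the database
with sqlite3.connect("seo.db") as conn:
    df.to_sql("rankings", conn, if_exists="append", index=False)

# 4) the web application then reads its data from seo.db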

cinci zoo sniper fucked around with this message at 20:32 on Nov 17, 2018

The DPRK
Nov 18, 2006

Lipstick Apathy

cinci zoo sniper posted:

Can't share implementation specifics unfortunately, NDA. In essence it's not too dissimilar from this official Dash example. I had to provide dashboards for several projects, so I did a plain landing page that would point to individual dashboards, each running as a segregated application. These applications would all talk to the same database server, which would itself automatically fetch new data at regular intervals.


I've not worked much with Moz & friends, but the general scenario should be similar in all such projects:

1) Extract data from source systems.
2) Mangle it to fit the tables you have defined.
3) Load it into your database.
4) Point your web application at the database to fetch data from.

You definitely want to learn at least the basics of the "admin side" of SQL stuff if you're dealing with projects like the one described. PostgreSQL and SQLite are two open source RDBMS that should together cover the ground for most, if not all, analytical projects you'd normally be doing without a support team to take care of this part of the equation. As for choosing between the two, this will get you started. Ignore MySQL, even though MySQL 8 may not be absolute dogshit when it comes to analytical use cases (it's also new and relatively untested compared to the MySQL 5 everyone still has in mind).

Super! Thanks for this. :)

The DPRK
Nov 18, 2006

Lipstick Apathy
After working through a bunch of guides I've got a fairly basic understanding of how I might do this. Some aspects of it like creating different views for different users will have to wait til much later. :D

I managed to create a working version showing keyword performance over time using a .csv I created myself. Right now I'm not sure how I'm going to automate this whole process since the initial API pull is much messier than the .csv I eventually end up using (which looks like this one from one of the guides: https://raw.githubusercontent.com/plotly/datasets/master/1962_2006_walmart_store_openings.csv)

I'm guessing I just need to get acquainted with pandas and learn how to make it ignore all the useless columns?

The DPRK fucked around with this message at 17:45 on Nov 20, 2018

cinci zoo sniper
Mar 15, 2013




The DPRK posted:

After working through a bunch of guides I've got a fairly basic understanding of how I might do this. Some aspects of it like creating different views for different users will have to wait til much later. :D

I managed to create a working version showing keyword performance over time using a .csv I created myself. Right now I'm not sure how I'm going to automate this whole process since the initial API pull is much messier than the .csv I eventually end up using (which looks like this one from one of the guides: https://raw.githubusercontent.com/plotly/datasets/master/1962_2006_walmart_store_openings.csv)

Nice! One step at a time is a good way to go, and bothering with views and ACLs is not really necessary to get a prototype running where users can only access the web GUI.

The DPRK posted:

I guess I just need to get acquainted with pandas and learn how to make it ignore all the useless columns?

Yes, do that. If your API pull returns you something table-ish then you are good to go with pandas right away; otherwise, for the first version, make it construct a list of dictionaries, which pandas can handle just as well. I'm not sure how comfortable you are with pandas, but I assume this link won't hurt.
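Something like this, with made-up column names, is all it takes to go from a list of dictionaries to a trimmed DataFrame:

code:
import pandas as pd

records = [
    {"keyword": "red shoes", "position": 3, "tracking_junk": "..."},
    {"keyword": "blue shoes", "position": 11, "tracking_junk": "..."},
]

df = pd.DataFrame(records)         # list of dicts -> DataFrame
df = df[["keyword", "position"]]   # keep only the columns you care about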

The DPRK
Nov 18, 2006

Lipstick Apathy

cinci zoo sniper posted:

Yes, do that. If your API pull returns you something table-ish then you are good to go with pandas right away; otherwise, for the first version, make it construct a list of dictionaries, which pandas can handle just as well. I'm not sure how comfortable you are with pandas, but I assume this link won't hurt.

That's perfect, thank you!

mr_package
Jun 13, 2000
I use type hints and have an issue where the 'with open(file_path)' syntax expects os.PathLike (or str/bytes), but other functions require a Path object, e.g. file_path.is_dir(), so if I have a function like this PyCharm starts to complain (even though the code would actually run fine) unless I use a Union type hint to include both:

code:
from os import PathLike
from pathlib import Path
from typing import Union

def open_file(file_path: Union[PathLike, Path]):
    if file_path.is_file():
        with open(file_path) as f:
            pass
Is this a bug in PyCharm's type hinting, or is it a Python bug where open() doesn't explicitly support pathlib.Path? Is that intended, and will it always stay this way? What I mean is: when using pathlib, should I expect to have to do this forever, or does it seem likely it will be fixed/improved in a future release so I should just set the type hint to Path and ignore the warning for now?

It seems the issue is that open() accepts so many things that type hinting it is troublesome (I have seen discussions online where people just say don't type hint on open, but that doesn't work if you are declaring types in a function signature as I'm doing here; maybe it works in places where you use inline typing, e.g. # type: Path).

cinci zoo sniper
Mar 15, 2013




mr_package posted:

I use type hints and have an issue where the 'with open(file_path)' syntax expects os.PathLike (or str/bytes), but other functions require a Path object, e.g. file_path.is_dir(), so if I have a function like this PyCharm starts to complain (even though the code would actually run fine) unless I use a Union type hint to include both:

code:
from os import PathLike
from pathlib import Path
from typing import Union

def open_file(file_path: Union[PathLike, Path]):
    if file_path.is_file():
        with open(file_path) as f:
            pass
Is this a bug in PyCharm's type hinting, or is it a Python bug where open() doesn't explicitly support pathlib.Path? Is that intended, and will it always stay this way? What I mean is: when using pathlib, should I expect to have to do this forever, or does it seem likely it will be fixed/improved in a future release so I should just set the type hint to Path and ignore the warning for now?

It seems the issue is that open() accepts so many things that type hinting it is troublesome (I have seen discussions online where people just say don't type hint on open, but that doesn't work if you are declaring types in a function signature as I'm doing here; maybe it works in places where you use inline typing, e.g. # type: Path).

A bit too late in the day for me to try to properly reason about what's happening with the types there, but you can trivially circumvent the issue by using Path's own open() method instead of the built-in open():

code:
from pathlib import Path


def open_file(file_path: Path):
    if file_path.is_file():
        with file_path.open() as f:
            print(f)
    return

mr_package
Jun 13, 2000
That is perfect, thanks!!!
edit: read_bytes, read_text, write_bytes, and write_text look useful too.
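Quick sketch of those helpers, with a made-up file name:

code:
from pathlib import Path

p = Path("example.txt")
p.write_text("hello\n")    # create/overwrite the file from a str
print(p.read_text())       # read the whole file back as a str
raw = p.read_bytes()       # raw bytes, no decoding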

mr_package fucked around with this message at 21:35 on Nov 20, 2018

mr_package
Jun 13, 2000
What is the rule with regard to functions that should return something useful but could in some circumstances return something (generally) useless? For example I have some code that should return a dictionary with data in it. If a user passes in 'bad' arguments it can return an empty dictionary. In that case is it better to raise ValueError, return None, or return the empty dictionary and let the caller handle it?

The same applies to functions that read files from disk. Currently if they pass in a non-existent file path they get an empty string. Should my function raise FileNotFoundError, return None, or continue as it is and have them either test that the path is good before passing it to my function or test that the result they got back is useful/expected?

If I raise an error, in the rare case someone wants this empty value (or more specifically the knowledge that nothing useful was found), they will need to try/except the Exception. I'm wondering if None is the best compromise since that's the default return value in the Python world anyway, and any code that is trying to use it blindly will quickly blow up, e.g. with a TypeError typically.

QuarkJets
Sep 8, 2008

I think assertions and exceptions are better simply because they clearly assign responsibility to the calling function to either verify inputs or deal with the fallout when something goes wrong. Your function should state its assumptions and raise an exception if they're violated. This is also why you shouldn't use blank or overly broad except blocks

The worst thing is for a function to fail silently and return something that you didn't expect, since that can create unpredictable results that need to be debugged. An exception should make it immediately clear as to what happened and why, but returning a default value can result in hours of troubleshooting to figure out why something is behaving weirdly (and you may not immediately discover the weird behavior, further complicating issues).

As a brief example consider a function that updates a database and returns an integer with the number of updated rows. If the database update fails due to a syntax error, that should raise an exception; you want to know immediately what failed and why. But you could instead fail silently and just return 0 as the number of updated rows; this is technically correct, as 0 rows were updated due to the syntax error, but probably you don't actually want this application to run for some days or weeks while not actually doing its job. Eventually it's discovered that the database hasn't been getting updated in the correct way, and now time has to be spent tracking down why, when an exception would have immediately revealed the problem.
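A minimal sketch of that contrast (the function and error handling are invented): letting the error propagate keeps the failure visible, while swallowing it hides the problem behind a plausible-looking 0.

code:
import sqlite3

def update_rows(conn, sql, params=()):
    cur = conn.execute(sql, params)   # a syntax error raises sqlite3.OperationalError here
    conn.commit()
    return cur.rowcount

def update_rows_silently(conn, sql, params=()):
    try:
        cur = conn.execute(sql, params)
        conn.commit()
        return cur.rowcount
    except sqlite3.Error:
        return 0   # "technically correct", but the caller never learns anything went wrong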

baka kaba
Jul 19, 2003

PLEASE ASK ME, THE SELF-PROFESSED NO #1 PAUL CATTERMOLE FAN IN THE SOMETHING AWFUL S-CLUB 7 MEGATHREAD, TO NAME A SINGLE SONG BY HIS EXCELLENT NU-METAL SIDE PROJECT, SKUA, AND IF I CAN'T PLEASE TELL ME TO
EAT SHIT

I feel like it's good to use all three (dunno if this is the Python way or anything)
  • data - you asked for something, I got it, here it is. May be empty
    e.g. file exists, but there's nothing in it so here's an empty string, or you fetch some data for a person and get an empty list because they have no entries for that field yet, etc
  • failure type - None can work for this, just something that represents a "couldn't get that data" state - an expected failure, distinct from success where the result happens to be empty
    e.g. the specified file doesn't exist, or there's no person matching that lookup value
  • exception - something went wrong, and I couldn't actually perform the task even though I should have been able to, and you'll probably want to handle this situation gracefully
    e.g. a file that should be there isn't for some reason, or a lookup failed to even run (maybe a parsing error)

Like QuarkJets says, if things throw exceptions then you can identify actual "oh no" problems, or you explicitly handle them (say if there's a network failure) which is probably gonna be a different path from when everything works. Having a failure state like None makes it clear that there's no actual result, and you can decide what to do in that case - you might handle it exactly the same as for an empty value, but if "empty data" and "no data at all" are distinct results then it can help to be explicit about what can happen and what you want to do about it


to flip it around - imagine you had a car's VIN and wanted to look up all the incidents it's been involved in by querying a remote site. You type in the VIN and get back an empty list or string or whatever. Does this mean
  1. the car was found and there are no incidents listed, yay
  2. there's no data for that car, so we have no idea either way - or maybe you typed the VIN wrong
  3. there was an error connecting to the site, so you should check your internet or try again later
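A rough sketch of how those three outcomes can be kept distinct in Python (the URL and names are made up):

code:
import requests

def lookup_incidents(vin: str):
    try:
        resp = requests.get(f"https://example.com/incidents/{vin}", timeout=10)
    except requests.ConnectionError:
        raise                 # 3. couldn't even do the lookup; let the caller decide
    if resp.status_code == 404:
        return None           # 2. no data for that VIN at all
    resp.raise_for_status()
    return resp.json()        # 1. a real answer, which may legitimately be an empty list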

baka kaba fucked around with this message at 00:26 on Nov 22, 2018

Dominoes
Sep 20, 2007

I don't have anything to add over how to best handle this, but Rust has an interesting approach: the Result type. Something like your dict-returning function would instead always return a Result<dict, error>; that value would then be handled with pattern matching on Ok(the valid dict) or Err(the error message)

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
Some offhand :words: on error handling: I use assert liberally, usually at the top of functions and whenever I make an impure call. It's a habit I picked up from C and sort of from C# contracts. I like doing this because I can focus on the constraints of what I'm writing without taking the time to figure out how I should handle the failure, if that makes sense. I can just grep for them later and fill it in once I figure it out. It also provides a relatively specific message for higher level code.
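In that spirit, a tiny illustration (made-up function; note that asserts disappear under python -O, so they're for catching programmer errors rather than validating user input):

code:
def apply_discount(price: float, pct: float) -> float:
    assert price >= 0, f"negative price: {price}"
    assert 0 <= pct <= 100, f"discount out of range: {pct}"
    return price * (1 - pct / 100)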

QuarkJets
Sep 8, 2008

I've been trying to get into the habit of using assertions, because there have been times in the past where they definitely wound up saving me time later. It's nice to have a function's assumptions placed right at the top of the function

cinci zoo sniper
Mar 15, 2013




Joining confession time on failure mode handling, I have to admit that I work on logging only if the end user is not me, or after a problem happens, which is not a very good strategy as you may imagine.

CarForumPoster
Jun 26, 2013

⚡POWER⚡
I have a dumb question: at some point in developing stuff, does it become more natural to read and work with JSON? Like, are there some of you who can deal with JSON as intuitively as you would data in a table/dataframe?

cinci zoo sniper
Mar 15, 2013




Eventually you don’t give a poo poo or bat an eye, but there can always be a mangled document, or just one complex enough that it will take time or effort to get it into tabular form, if that is what you lead into. Despite having JSON sources in each work project, however, and being comfortable working with it, I still loathe the format immensely.

huhu
Feb 24, 2006

CarForumPoster posted:

I have a dumb question: at some point in developing stuff, does it become more natural to read and work with JSON? Like, are there some of you who can deal with JSON as intuitively as you would data in a table/dataframe?

Having a way to view it can be helpful. Something like this https://chrome.google.com/webstore/detail/json-formatter/bcjindcccaagfpapjjmafapmmgkkhgoa?hl=en or a plugin for your text editor makes it easier to read. I like being able to collapse sections of a JSON doc to focus on what I need.

There's not a native way to do this in Python, but I really like JavaScript's destructuring and found a lambda function that can approximate it, which makes it easier to split apart huge JSON blobs into more manageable chunks.

code:
>>> import json
>>> json_string = '{"first_name": "Guido", "last_name":"Rossum"}'
>>> parsed_json = json.loads(json_string)
>>> pluck = lambda dict, *args: (dict[arg] for arg in args)
>>> first_name, last_name = pluck(parsed_json, 'first_name', 'last_name')
>>> first_name
'Guido'

huhu fucked around with this message at 20:03 on Nov 22, 2018

Wallet
Jun 19, 2006

CarForumPoster posted:

I have a dumb question: at some point in developing stuff, does it become more natural to read and work with JSON? Like, are there some of you who can deal with JSON as intuitively as you would data in a table/dataframe?

I find JSON rather convenient and straightforward to work with, but it's definitely not the format I would choose if being able to conveniently read the data is important. When people try to get too cute with it and use it to store data that is irregular or deeply nested, it quickly turns into a massive ball-ache.

cinci zoo sniper posted:

Despite having JSON sources in each work project, however, and being comfortable working with it, I still loathe the format immensely.

What makes you loathe it, out of interest?

cinci zoo sniper
Mar 15, 2013




Wallet posted:

What makes you loathe it, out of interest?

My use case is consumption of complex documents, and I'm not a web developer, so the problem is its lack of quite literally everything that makes XML "bloated": a schema standard, a querying standard, metadata, custom data types, namespaces, comments. Hell, even CDATA, although it's often a container for war crimes against common sense. JSON does not offer a single decisive advantage that I can think of.

Methanar
Sep 26, 2013

by the sex ghost
I convert yaml to json so I can actually read it
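If anyone wants the short version of that conversion, something like this works, assuming PyYAML is installed (default=str takes care of dates that YAML parses into datetime objects):

code:
import json
import sys

import yaml

print(json.dumps(yaml.safe_load(sys.stdin), indent=2, default=str))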

Data Graham
Dec 28, 2009

📈📊🍪😋



Client-side validation is a gateway drug

Wallet
Jun 19, 2006

cinci zoo sniper posted:

My use case is consumption of complex documents, and I'm not a web developer, so the problem is its lack of quite literally everything that makes XML "bloated": a schema standard, a querying standard, metadata, custom data types, namespaces, comments. Hell, even CDATA, although it's often a container for war crimes against common sense. JSON does not offer a single decisive advantage that I can think of.

That makes sense. I feel like the decisive advantage of JSON is that it's a quick, straightforward way to dump/store/load simple data. I'm currently dealing with the mess left by a bunch of idiots who decided to use Mongo for a project that desperately needed a relational database, so I am entirely sympathetic to the hell people create when they try to use tools that are too simple for the job.

cinci zoo sniper
Mar 15, 2013




Wallet posted:

That makes sense. I feel like the decisive advantage of JSON is that it's a quick, straightforward way to dump/store/load simple data. I'm currently dealing with the mess left by a bunch of idiots who decided to use Mongo for a project that desperately needed a relational database, so I am entirely sympathetic to the hell people create when they try to use tools that are too simple for the job.

Yeah, for straightforward transport, especially internally, I'd use JSON too. Also condolences on Mongo, my company did the same poo poo until I came along to yell.

Methanar
Sep 26, 2013

by the sex ghost
Mongo and javascript are really cool.

We have some data in a collection that is both a string and an object, depending on which platform the user was on when the data was created, because mongoDB lets you Move Fast and gently caress Up Your Data and the nodejs mongo driver lets it work by accident anyway.

By the way go gently caress yourself if anything else needs to ever access the data

Methanar
Sep 26, 2013

by the sex ghost
A database whose entire appeal is that it lets you Move Fast and Break Things and Accelerate Developer Velocity and be Agile by letting you get away in the early days with having zero schema, plan, or interdeveloper communication about what the gently caress you're even doing is really cool and good

cinci zoo sniper
Mar 15, 2013




I had the delight of trying to build analytics on documents originally in XML, poorly converted to JSON and stored in Mongo, with no original XMLs available. And by poorly converted I mean that I had to write several query scripts just to deal with the fact that the record ID number value could be stored differently.

ynohtna
Feb 16, 2007

backwoods compatible
Illegal Hen
MongoDB: what your data deserves.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Hadlock posted:

Django rest framework looks like the ticket, thanks

Cron would probably be simpler, but this needs to be totally self-contained in a container so I'm looking pretty hard at django_celery_beat, as it's easier to spin up a redis container than it is to set up cron tbh

FYI, python-rq will fit your needs.
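For reference, enqueuing work with rq looks roughly like this, assuming a Redis container and a running rq worker process; the periodic trigger itself would come from something like rq-scheduler, and the task import is hypothetical:

code:
from redis import Redis
from rq import Queue

from stacks.tasks import poll_external_source  # hypothetical task function

queue = Queue(connection=Redis(host="redis"))
queue.enqueue(poll_external_source)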

Wallet
Jun 19, 2006

Methanar posted:

Mongo and javascript are really cool.

We have some data in a collection that is both a string and an object, depending on which platform the user was on when the data was created, because mongoDB lets you Move Fast and gently caress Up Your Data and the nodejs mongo driver lets it work by accident anyway.

By the way go gently caress yourself if anything else needs to ever access the data

Basically this, over and over again forever. The best way to store a date? Clearly a string, except when you feel like using an ISODate. Want to record whether something is supposed to be on or off? Use a string and set the value to "on" or "off", obviously. Want a bunch of strings all bundled together, possibly in order? Use an object, and make sure that all of its properties have names that include apostrophes and dashes, then use them to store your strings. Want to record the ID of a related record and what type of relationship the current record has to it? Don't just use a plain old string—instead, use the ID of the related record as the field name and then store the kind of relationship in that field. What could go wrong?

Wallet fucked around with this message at 01:39 on Nov 24, 2018

Dominoes
Sep 20, 2007

Wallet posted:

Basically this, over and over again forever. The best way to store a date? Clearly a string, except when you feel like using an ISODate. Want to record whether something is supposed to be on or off? Use a string and set the value to "on" or "off", obviously. Want a bunch of strings all bundled together, possibly in order? Use an object, and make sure that all of its properties have names that include apostrophes and dashes, then use them to store your strings. Want to record the ID of a related record and what type of relationship the current record has to it? Don't just use a plain old string—instead, use the ID of the related record as the field name and then store the kind of relationship in that field. What could go wrong?

I use strings in JS for pure dates because neither the internal Date module nor the popular Moment lib supports them! The community doesn't seem to mind, but it seems nuts to me.

Dominoes fucked around with this message at 02:06 on Nov 24, 2018

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
BSON is kinda silly. Auditing mongoDB databases is fun (because you can charge more). My DB class in college was 90% mongoDB and also 100% a waste of time.
