|
nielsm posted:
To display a map with your own overlay, you can look into OpenStreetMap.

Edit: Basically, unless you want to use Google Maps or Mapbox, you should probably start by using Leaflet, OpenLayers, or MapLibre with a tile provider (I think a bunch of them have free tiers), and then you can always switch between them or try to self-host the tiles later. Self-hosting tiles is getting easier, so even if you don't want to do that now it may be completely trivial soon, and if you're using one of those map libraries it should be easy to switch whenever you want to.

mystes fucked around with this message at 21:29 on Apr 5, 2024
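For illustration, a minimal Leaflet page with OSM tiles and one overlay marker looks something like this (the CDN version, coordinates, and element id are placeholders, not anything from this thread):

```html
<!-- Sketch: Leaflet + OpenStreetMap raster tiles with a single marker overlay. -->
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css" />
<script src="https://unpkg.com/leaflet@1.9.4/dist/leaflet.js"></script>
<div id="map" style="height: 400px"></div>
<script>
  // Center the map and add the OSM tile layer (attribution is required by OSM).
  const map = L.map('map').setView([51.505, -0.09], 13);
  L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
    attribution: '&copy; OpenStreetMap contributors'
  }).addTo(map);
  // Your overlay: a marker, but polygons/GeoJSON work the same way.
  L.marker([51.5, -0.09]).addTo(map).bindPopup('Your overlay here');
</script>
```

Swapping the tile URL for a commercial provider's endpoint (or a self-hosted tile server) is the only change needed later, which is the portability point above.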
# ? Apr 5, 2024 21:17 |
|
|
# ? Apr 24, 2024 22:20 |
|
CitizenKeen posted:
I wouldn't rush to swap to Vue; you can do a lot with simple HTML + HTMX and a little JavaScript. I've built a large ecom platform and we are removing all of our React and moving to a more static-file approach with HTMX. The performance increase in rendering is probably 100x.
|
# ? Apr 7, 2024 13:24 |
|
Funking Giblet posted:
I wouldn't rush to swap to Vue; you can do a lot with simple HTML + HTMX and a little JavaScript. I've built a large ecom platform and we are removing all of our React and moving to a more static-file approach with HTMX. The performance increase in rendering is probably 100x.

I haven't kept up with the JavaScript/web stuff that much, but from what I've seen of HTMX it looks pretty elegant.
|
# ? Apr 14, 2024 18:58 |
|
If you already want to render everything on the server but then do progressive enhancement primarily through dynamically loading HTML fragments rather than reloading the whole page, like what used to be common, htmx is a very good option in 2024 for doing that without having to manually write any JavaScript. But imo it's not necessarily as novel or exciting as some people are making it out to be.

I don't think it's helpful to even think of "vue" vs "htmx" as competing frameworks or something. The first question is whether you want to write something as an SPA and render stuff on the client (maybe adding back server rendering using something like next.js for SEO or performance), or use traditional server rendering; if you choose the latter, htmx may be useful.

I do think that some of the htmx hype is because people who don't know what the web was like 20 years ago, and are only used to stuff like react or vue, are looking at traditional server rendering for the first time and only seeing the ways it seems simpler than having a separate frontend making API calls, but maybe not seeing the other ways that approach can actually be more complicated (e.g. having to keep a bunch of view state temporarily stored on the server for each session when you're doing anything remotely complicated). But that's not to say it's a bad approach either; it worked for a really long time and it can still work perfectly fine in 2024.

mystes fucked around with this message at 19:20 on Apr 14, 2024
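The fragment-loading pattern being described can be sketched in a few lines of markup (the endpoint URL and element id here are made up for illustration):

```html
<!-- Sketch: htmx progressive enhancement. /fragments/results is a
     hypothetical server endpoint returning a rendered HTML fragment. -->
<button hx-get="/fragments/results" hx-target="#results" hx-swap="innerHTML">
  Load results
</button>
<div id="results">
  <!-- the server-rendered fragment replaces this element's contents -->
</div>
```

The server keeps rendering HTML as in a traditional app; htmx just swaps the returned fragment into the target element instead of navigating.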
# ? Apr 14, 2024 19:09 |
|
Sorry, still looping this thread into my "check up on" list. My bad.

Yeah, the page is mostly just data retrieval, with one huge form. That form might be better as an SPA, but I don't think it's worth turning the whole page into an SPA. I get a lot of positive feedback on how thin the site is, so I think I made the right call.

I am running into a problem, though, and I have no idea where to begin solving it. This is mostly devops stuff, so if there's a better thread, I apologize. I think I'm running into some scaling problems. I've gone from 1,000 users in February to 24,000 now, so it's been a bit wild. I'm seeing these spikes in my processor and my users are reporting intermittent 502s... https://imgur.com/a/gd3Z3W6

I have no idea how to debug that or where to begin. I'm running a .NET 7 Razor Pages app on a DigitalOcean server with Postgres. That's about it, pretty bog standard. I've made a few small changes to the nginx config based on googling to increase some cache buffer sizes, but otherwise it's mostly out of the box. No cron jobs, no scheduled recurring processes. Just Entity Framework queries and a few views for intensive reads. (I'm also running a Discord bot using DSharpPlus, but that's mostly a wrapper around two commands which pull some basic static info from my site.)

I don't know if Postgres was the right choice - it was free and its syntax matches SQL Server, which I'm used to at work - because my understanding is that it's optimized for writes instead of reads and my site is pretty read-heavy.

I'm kind of lost on how to start tracking down why my server's CPU just starts running away. (It's an 8 GB / 4-CPU basic Intel box on DigitalOcean.) I can afford to double those resources if I'm just running into resource limits, but I'm concerned it'll be like urban traffic and just swell to fill whatever new stuff I provision. Any thoughts on where to start investigating this? Please and thank you.
|
# ? Apr 22, 2024 19:08 |
|
can you set up your hosting thing to capture a process dump if cpu spikes above 90% or something like that? we've done that with our azure thing in the past, but no idea what sort of tooling digital ocean offers in that regard.

can you correlate request count or duration with the spikes? could there be users with an unusually large number of somethings? example: at a previous job we had strange intermittent perf problems that ultimately arose from a bot who had added thousands of items to its cart (a normal number of items would have been less than 20 or something like that).
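On a plain DigitalOcean VM there's no built-in trigger for this, but a cron-driven watcher is one way to sketch it. Everything here is an assumption for illustration: the app DLL name, the 90% threshold, the dump path, and that `dotnet-dump` has been installed via `dotnet tool install -g dotnet-dump`.

```sh
#!/bin/sh
# Sketch: capture a managed process dump when CPU is high.
# Run from cron every minute or so; does nothing if the app isn't running.
PID=$(pgrep -f MyApp.dll | head -n1)   # MyApp.dll is a placeholder
[ -n "$PID" ] || exit 0
CPU=$(ps -p "$PID" -o %cpu= | cut -d. -f1)   # integer part of %cpu
if [ "${CPU:-0}" -ge 90 ]; then
    dotnet-dump collect -p "$PID" -o "/var/dumps/spike-$(date +%s).dmp"
fi
```

A dump collected mid-spike can then be inspected with `dotnet-dump analyze` (thread stacks, heap) to see what the runaway CPU was actually doing.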
|
# ? Apr 23, 2024 07:52 |
|
redleader posted:
can you set up your hosting thing to capture a process dump if cpu spikes above 90% or something like that? we've done that with our azure thing in the past, but no idea what sort of tooling digital ocean offers in that regard

The site is hosted on a DigitalOcean VM, so I have a pretty hefty ability to add what I want to it, but it's just a hosted Linux box, so I don't have true god mode. The problem is, I don't know what kind of tooling or tracing or anything I should be pursuing. I started this project to learn some industry practices that my management team doesn't value enough, but this scaled faster than I expected. All I've got in place is Serilog.

I have a search-syntax DSL where users can write very simple queries against the data; I'm wondering if I left some infinite regression in there or something. I'm going to have Serilog write out all queries to a separate log file so I can look over what kind of queries people are writing. Right now, the things I need to learn are:
I also need to set up a staging environment; that would probably alleviate some of my stress.
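Routing the user queries to their own log file can be sketched with Serilog sub-loggers, assuming the Serilog.Sinks.File package; the "MyApp.Search" source name and file paths are made-up placeholders:

```csharp
// Sketch: split instrumentation logging from normal operational logging.
using Serilog;
using Serilog.Filters;

Log.Logger = new LoggerConfiguration()
    // User-query events (tagged with the hypothetical source "MyApp.Search")
    // go to their own rolling file, so they don't drown out app logs.
    .WriteTo.Logger(lc => lc
        .Filter.ByIncludingOnly(Matching.FromSource("MyApp.Search"))
        .WriteTo.File("logs/queries-.log", rollingInterval: RollingInterval.Day))
    // Everything else goes to the main app log.
    .WriteTo.Logger(lc => lc
        .Filter.ByExcluding(Matching.FromSource("MyApp.Search"))
        .WriteTo.File("logs/app-.log", rollingInterval: RollingInterval.Day))
    .CreateLogger();

// In the search code: log the raw query as a structured property, e.g.
// Log.ForContext("SourceContext", "MyApp.Search")
//    .Information("User query: {Query}", rawQuery);
```

Logging the query as a structured `{Query}` property (rather than string-concatenating it) makes the separate file easy to grep or feed into analysis later.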
|
# ? Apr 23, 2024 16:23 |
You evaluate your code in production by instrumenting it with logging that captures data that can help answer your questions, without interrupting the normal request handling. What data you need to capture depends on the application. For example, capture the total request times (request received until response ready to be sent) together with some category of the request, so you can measure whether you have outliers where some requests take an unusually long time compared to other requests of the same type, or whether the request processing time increases when the service is busy. Depending on the nature of the data you process, you can also try capturing request series and replaying them in a testing environment, where you can instrument things more heavily.

Figuring out whether it's worth optimizing your program more, or throwing more hardware at it, is not a science. It's just as much a business tradeoff: what's more expensive, your development time or your increased hosting bill? It's probably always a good idea to try to identify the cause of slowness, to decide whether it's something you can reasonably improve on or not, and whether it's something that could prevent scaling to faster/wider hardware.
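In an ASP.NET Core app, the request-time capture described above can be sketched as a small inline middleware (the log message shape here is just one possible choice):

```csharp
// Sketch: time every request and log method, path, status, and elapsed ms,
// so slow outliers can later be grouped by path and graphed.
app.Use(async (context, next) =>
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    await next();          // run the rest of the pipeline (the real work)
    sw.Stop();
    Log.Information("{Method} {Path} => {Status} in {ElapsedMs} ms",
        context.Request.Method,
        context.Request.Path,
        context.Response.StatusCode,
        sw.ElapsedMilliseconds);
});
```

Because the properties are structured, a log analysis tool (or even a script over the files) can aggregate per-path percentiles rather than just eyeballing raw lines.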
|
|
# ? Apr 23, 2024 16:51 |
|
nielsm posted:
You evaluate your code in production by instrumenting it with logging that captures data that can help answer your questions, without interrupting the normal request handling. What data you need to capture depends on the application.

Is that something I can do with Serilog, or is that a separate kind of logging?
|
# ? Apr 23, 2024 17:05 |
CitizenKeen posted:
Is that something I can do with Serilog, or is that a separate kind of logging?

Probably? The main thing is to make sure you can retrieve the logged data so you can graph or analyze it in a useful way. You also need to make sure the instrumentation logging doesn't drown out other operational or error logging, perhaps by logging them to separate streams/files/tables. If you need to do instrumentation that captures lots of data, it might be better to make a database table/whatever specifically for that data, so you have it structured, efficiently packed, and indexed.
|
|
# ? Apr 23, 2024 17:16 |
|
If you think the issue is with the user-written queries to the database, you could enable query logging for queries that take over a set amount of time. I'm pretty sure PGSQL has some settings to enable and capture this. Might be another option to help you figure out what's going wrong.
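Postgres does have this: `log_min_duration_statement` logs any statement that runs longer than a threshold. A minimal sketch (the 500 ms threshold is a guess to tune, not a recommendation from this thread):

```sql
-- Log every statement that takes longer than 500 ms.
-- Requires superuser; takes effect after the configuration reload below.
ALTER SYSTEM SET log_min_duration_statement = '500ms';
SELECT pg_reload_conf();
```

The slow statements then show up in the Postgres server log with their full text and duration, which is exactly what's needed to spot a pathological user query.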
|
# ? Apr 23, 2024 17:22 |
|
Rawrbomb posted:
If you think the issue is with the user written queries to the database, you could enable some query logging for queries that are taking over a set amount of time. I'm pretty sure PGSQL has some settings to enable and capture this. Might be another option to help you figure out what's going wrong.

Oh yeah, that's probably a good idea; I'll investigate what PGSQL has. Then I'll have to work backwards from the SQL that Postgres is running to the Entity Framework code that's generating it - that might be a mess, but it's my mess.
|
# ? Apr 23, 2024 18:56 |
|
|
You can also configure Entity Framework to log all of the SQL queries it generates. https://learn.microsoft.com/en-us/ef/core/logging-events-diagnostics/
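With EF Core's simple logging, that can be sketched like this; forwarding to Serilog's static logger is just one option, and the event filter shown limits output to executed commands:

```csharp
// Sketch: forward EF Core's executed SQL to Serilog.
// CommandExecuted fires once per query with the final SQL and timing.
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;
using Serilog;

protected override void OnConfiguring(DbContextOptionsBuilder options)
    => options.LogTo(
        sql => Log.Information("EF: {EfLog}", sql),
        new[] { RelationalEventId.CommandExecuted });
```

That gives the EF-side view of the same queries Postgres would log, which makes the "work backwards from SQL to Entity Framework" step above much less of a mess.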
|
# ? Apr 23, 2024 20:11 |